Wednesday, November 28, 2007

Oracle E-Business Suite

Oracle E-Business Suite is the industry's only complete and integrated set of enterprise applications, working together seamlessly to streamline every area of your business—from sales, service, and marketing, through financials and human resources, to supply chain and manufacturing. Oracle E-Business Suite is your fastest path to high-quality enterprise intelligence, bringing your company a true 360-degree view of your finances, your customers, and your supply chains, so you can make faster, better decisions and grow profitability in a competitive marketplace.
Application software typically automates only departmental business processes. Oracle
E-Business Suite is different; it automates all parts of your business. From developing, marketing, selling, ordering, planning, procuring, manufacturing, fulfilling, servicing, and maintaining, to handling finance, human resources, and project management, Oracle E-Business Suite provides a comprehensive and integrated offering. In the past, you had to choose between an integrated suite and “best of breed” applications for rich functionality. With Oracle, you can now have an integrated suite built on a unified information architecture, with the functionality you need in each individual application. These applications connect business processes within and across departmental, geographical, and line-of-business domains. With Oracle E-Business Suite's depth of product functionality and breadth of product offering, you can take your business further by automating processes across the enterprise.
Oracle E-Business Suite - Industry Applications
Oracle E-Business Suite 11i.10 offers over 2,100 new capabilities, half of which meet specific industry needs, including:
Financial Services: SOP documentation and auditing for compliance with Sarbanes-Oxley and other regulations
Healthcare: Medication administration, patient encounter-specific financial information, integrated patient care and operational intelligence
Manufacturing/High Technology: Option-dependent sourcing, automated spare parts return and repair processing, international drop shipments, distribution planning

Tuesday, November 20, 2007

Oracle Applications 11i File System

The COMN Directory
The COMN or COMMON_TOP directory contains files used by many different Oracle Applications products, some of which may also be used with third-party products.
The admin Directory
The admin directory, under the COMMON_TOP directory, is the default location for the concurrent manager log and output directories. When the concurrent managers run Oracle Applications reports, they write the log files and temporary files to the log subdirectory of the admin directory, and the output files to the out subdirectory of the admin directory.
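For example, on many default installs the log and output locations can be checked from the applications environment; this sketch assumes APPLCSF points to the admin directory under COMMON_TOP, as it normally does:

    echo $APPLCSF          # typically $COMMON_TOP/admin
    ls $APPLCSF/$APPLLOG   # concurrent manager and request log files
    ls $APPLCSF/$APPLOUT   # concurrent request output files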
The html Directory
The OA_HTML environment setting points to the html directory. The Oracle Applications HTML-based sign-on screen and Oracle HTML-based Applications HTML files are installed here. The html directory also contains other files used by the HTML-based products, such as JavaServer Page (JSP) files, JavaScript files, XML files, and style sheets. Rapid Install and the AD utilities copy the HTML-based product files from each _TOP directory to subdirectories in the html directory.
The java Directory
The JAVA_TOP environment setting points to the java directory. Rapid Install installs all
Oracle Applications JAR files in the Oracle namespace of this JAVA_TOP directory. The java directory also holds third-party Java files used by Oracle Applications, as well as other zip files.
The portal Directory
The portal directory contains the Rapid Install Portal files. The Rapid Install Portal is a web page that provides access to post-install tasks that may be necessary for your installation, plus server administration scripts, installation documentation, and online help. Using a browser, you can view the Rapid Install Portal after you run Rapid Install.
The temp Directory
The temp directory is used for caching by some products such as Oracle Reports.
The util Directory
The util directory contains the third-party utilities licensed to ship with Oracle Applications. These include, for example, the Java Runtime Environment (JRE), Java Development Kit (JDK), and the Zip utility.
The scripts Directory
The scripts directory contains application tier control scripts such as adstrtal.sh and adstpall.sh, which are located in the context-specific (<CONTEXT_NAME>) subdirectory.
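For example, the full application tier can be stopped and restarted with these scripts; apps/apps is a placeholder for the real APPS credentials, and the path assumes a default install:

    cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
    adstpall.sh apps/apps    # stop all application tier services
    adstrtal.sh apps/apps    # start all application tier services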

Sunday, November 18, 2007

Oracle Application Architecture

Application Architecture
• Building a new application
• Registering your application
• Application directory structure
• Defining the application base path
Table registration
• Registering tables and their sequences with the AD_DD package
• Registering views and columns using the AD_DD package
• A detailed example of registering a table and using it in a DFF value set (see the PL/SQL sketch after this list)
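A minimal PL/SQL sketch of the registration calls, run as APPS; the XXCUST application and XX_ORDERS table are hypothetical, and the storage arguments follow the usual AD_DD examples:

    BEGIN
      -- Register the custom table with Oracle Applications ('T' = transaction data)
      ad_dd.register_table(p_appl_short_name => 'XXCUST',
                           p_tab_name        => 'XX_ORDERS',
                           p_tab_type        => 'T',
                           p_next_extent     => 512,
                           p_pct_free        => 10,
                           p_pct_used        => 70);
      -- Register each column (repeat per column)
      ad_dd.register_column(p_appl_short_name => 'XXCUST',
                            p_tab_name        => 'XX_ORDERS',
                            p_col_name        => 'ORDER_ID',
                            p_col_seq         => 1,
                            p_col_type        => 'NUMBER',
                            p_col_width       => 22,
                            p_nullable        => 'N',
                            p_translate       => 'N');
      COMMIT;
    END;
    /

Once the table is registered, it becomes visible to features such as descriptive flexfields that need the data dictionary entries.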
The TEMPLATE Form
• Overview of the TEMPLATE form
• Libraries attached to the TEMPLATE form
• Special triggers in the TEMPLATE form (see the trigger sketch after this list)
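For illustration, the form-level triggers in a form copied from TEMPLATE usually look like this sketch; the form, application, and window names are hypothetical:

    -- PRE-FORM
    FND_STANDARD.FORM_INFO('$Revision: 1.0 $', 'XX_MY_FORM', 'XXCUST',
                           '$Date: 2007/11/18 $', '$Author: developer $');
    APP_STANDARD.EVENT('PRE-FORM');
    APP_WINDOW.SET_WINDOW_POSITION('MAIN_WINDOW', 'FIRST_WINDOW');

    -- WHEN-NEW-FORM-INSTANCE
    APP_STANDARD.EVENT('WHEN-NEW-FORM-INSTANCE');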
Creating new forms
• Copying TEMPLATE to start new form development
• Adding property classes to different objects
• Registering a form
• Registering form functions
• Creating a menu of functions
• Creating a responsibility
Master-Detail blocks
• Master-detail blocks
• Summary/detail windows
• Row LOVs
• Query windows and Who-column information tracking
• A sample .fmb containing a master-detail block
Message Dictionary
• Message Dictionary overview
• Defining messages for your application (see the FND_MESSAGE sketch after this list)
• Message content standards
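A minimal server-side sketch using the FND_MESSAGE API; the XXCUST application, message name, and token are hypothetical:

    BEGIN
      fnd_message.set_name('XXCUST', 'XX_INVALID_AMOUNT');  -- application and message name
      fnd_message.set_token('AMOUNT', '-10');               -- substitute the &AMOUNT token
      dbms_output.put_line(fnd_message.get);                -- translated, token-substituted text
    END;
    /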
Flexfields
• Overview and Benefits of Flexfields
• Implementing key flexfields
• Implementing descriptive flexfields
Concurrent Processing
• Concurrent processing overview
• Defining concurrent programs
• Executable types, with examples of commonly used types
• Defining request sets
• Submitting your concurrent programs (see the FND_REQUEST sketch after this list)
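A hedged sketch of submitting a program from PL/SQL with FND_REQUEST; the application, program short name, and argument are hypothetical, and fnd_global.apps_initialize is needed first when running outside a Forms or web session:

    DECLARE
      l_request_id NUMBER;
    BEGIN
      l_request_id := fnd_request.submit_request(
                        application => 'XXCUST',
                        program     => 'XX_MY_PROG',
                        description => NULL,
                        start_time  => NULL,
                        sub_request => FALSE,
                        argument1   => '2007');
      COMMIT;   -- the concurrent managers only see the request after commit
      dbms_output.put_line('Request ID: ' || l_request_id);
    END;
    /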

Saturday, November 17, 2007

Data Pump Components

Oracle Data Pump is made up of three distinct parts:

The command-line clients, expdp and impdp.
The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API).
The DBMS_METADATA PL/SQL package (also known as the Metadata API).

The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively. They provide a user interface that closely resembles the original export (exp) and import (imp) utilities.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered on the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.
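As an illustration of that layering, a schema export can also be driven directly through the API; a minimal sketch, assuming a directory object named DPUMP_DIR exists and the HR schema is being exported:

    DECLARE
      h  NUMBER;
      js VARCHAR2(30);
    BEGIN
      -- Create an export job in schema mode (roughly: expdp ... SCHEMAS=hr)
      h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
      dbms_datapump.add_file(handle => h, filename => 'hr.dmp', directory => 'DPUMP_DIR');
      dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
      dbms_datapump.start_job(h);
      dbms_datapump.wait_for_job(h, js);   -- block until the job completes
      dbms_output.put_line('Job state: ' || js);
    END;
    /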

New Features In Data Pump Export and Import

The new Data Pump Export and Import utilities (invoked with the expdp and impdp commands, respectively) have a similar look and feel to the original Export (exp) and Import (imp) utilities, but they are completely separate. Dump files generated by the new Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.

Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features, except for XML schemas. Original Export and Import support the full set of Oracle database release 9.2 features. Also, the design of Data Pump Export and Import results in greatly enhanced data movement performance over the original Export and Import utilities.

The following are the major new features that provide this increased performance, as well as enhanced ease of use:

The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. This feature is available only in the Enterprise Edition of Oracle Database 10g.
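The degree of parallelism is set with the PARALLEL parameter; in this hedged example the dpump_dir directory object and the HR schema are assumptions, and %U generates one file per parallel stream:

    expdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr%U.dmp PARALLEL=4 SCHEMAS=hr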

The ability to restart Data Pump jobs.

The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job.
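For example, after detaching (Ctrl-C, or simply closing the client) you can reattach by job name; the name below is the default system-generated one and may differ on your system:

    expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01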

Support for export and import operations over the network, in which the source of each operation is a remote instance.
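Network mode is driven by the NETWORK_LINK parameter, which names a database link in the local database; this sketch assumes a link called source_db already exists:

    impdp hr/hr DIRECTORY=dpump_dir NETWORK_LINK=source_db TABLES=employees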

The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced.
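This is done with the REMAP_DATAFILE import parameter; both paths below are hypothetical, and because of the quoting it is often easiest to put this in a parameter file:

    impdp system/manager DIRECTORY=dpump_dir DUMPFILE=full.dmp FULL=y REMAP_DATAFILE='/u01/oradata/olddb/tools01.dbf':'/u02/oradata/newdb/tools01.dbf'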

Enhanced support for remapping tablespaces during an import operation.
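Tablespace remapping uses the REMAP_TABLESPACE parameter; the tablespace names here are hypothetical:

    impdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp REMAP_TABLESPACE=users:hr_data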

Support for filtering the metadata that is exported and imported, based upon objects and object types.
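Filtering is expressed with the EXCLUDE and INCLUDE parameters; for example (the quoting of the name clause may need adjustment for your shell):

    expdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp SCHEMAS=hr EXCLUDE=INDEX,STATISTICS
    expdp hr/hr DIRECTORY=dpump_dir DUMPFILE=emp.dmp INCLUDE=TABLE:\"IN ('EMPLOYEES')\"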

Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
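Pressing Ctrl-C in a running client switches to interactive-command mode, where commands such as STATUS, STOP_JOB, START_JOB, PARALLEL, and KILL_JOB are available; a sketch of a session:

    Export> STATUS
    Export> STOP_JOB=IMMEDIATE
    (later, from any machine)
    expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
    Export> START_JOB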

The ability to estimate how much space an export job would consume, without actually performing the export.
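Space estimation is requested with ESTIMATE_ONLY, in which case no dump file is written:

    expdp hr/hr ESTIMATE_ONLY=y SCHEMAS=hr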

The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. In import jobs, VERSION applies only to operations over the network; there it controls the version of the database objects to be extracted from the source database.
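For example, to create a dump file set that an Oracle Database 10.1 instance could import (the version value is illustrative):

    expdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr_v101.dmp SCHEMAS=hr VERSION=10.1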

Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.)

Original Export and Import vs. Data Pump Export and Import

If you are familiar with the original Export (exp) and Import (imp) utilities, it is important to understand that many of the concepts behind them do not apply to Data Pump Export (expdp) and Data Pump Import (impdp). In particular:

Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.

Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations (see the example after this list).

Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets is more variable.

Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.

Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
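Because the files are opened by the database server, access is granted through a directory object; a minimal sketch, in which the path and grantee are hypothetical:

    CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dumps';
    GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;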


Data Pump Export and Import also drop several options that the original utilities provided:

At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.

There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to the storage parameters of the target table.

Sequential media, such as tapes and pipes, are not supported.

When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load (see the example below).
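The APPEND and TRUNCATE behaviors are selected with the TABLE_EXISTS_ACTION import parameter; the directory object, dump file, and table here are hypothetical:

    impdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp TABLES=employees TABLE_EXISTS_ACTION=APPEND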