TalendOpenStudio DI UG 51a en
5.1_a
Copyleft
This documentation is provided under the terms of the Creative Commons Public License (CCPL). For more information about what you can and cannot do with this documentation in accordance with the CCPL, please read: http://creativecommons.org/licenses/by-nc-sa/2.0/
Notices
All brands, product names, company names, trademarks and service marks are the properties of their respective owners.
Table of Contents

Preface
  1. General information
    1.1. Purpose
    1.2. Audience
    1.3. Typographical conventions
  2. History of changes
  3. Feedback and Support
4.5.7. How to use the Use Output Stream feature
4.6. Handling Jobs: miscellaneous subjects
  4.6.1. How to share a database connection
  4.6.2. How to define the Start component
  4.6.3. How to handle error icons on components or Jobs
  4.6.4. How to add notes to a Job design
  4.6.5. How to display the code or the outline of your Job
  4.6.6. How to manage the subjob display
  4.6.7. How to define options on the Job view
  4.6.8. How to find components in Jobs
  4.6.9. How to set default values in the schema of a component
7.2.2. Step 2: Connection
7.2.3. Step 3: Table upload
7.2.4. Step 4: Schema definition
7.3. Setting up a JDBC schema
  7.3.1. Step 1: General properties
  7.3.2. Step 2: Connection
  7.3.3. Step 3: Table upload
  7.3.4. Step 4: Schema definition
7.4. Setting up a SAS connection
  7.4.1. Prerequisites
  7.4.2. Step 1: General properties
  7.4.3. Step 2: Connection
7.5. Setting up a File Delimited schema
  7.5.1. Step 1: General properties
  7.5.2. Step 2: File upload
  7.5.3. Step 3: Schema definition
  7.5.4. Step 4: Final schema
7.6. Setting up a File Positional schema
  7.6.1. Step 1: General properties
  7.6.2. Step 2: Connection and file upload
  7.6.3. Step 3: Schema refining
  7.6.4. Step 4: Finalizing the end schema
7.7. Setting up a File Regex schema
  7.7.1. Step 1: General properties
  7.7.2. Step 2: File upload
  7.7.3. Step 3: Schema definition
  7.7.4. Step 4: Finalizing the end schema
7.8. Setting up an XML file schema
  7.8.1. Setting up an XML schema for an input file
  7.8.2. Setting up an XML schema for an output file
7.9. Setting up a File Excel schema
  7.9.1. Step 1: General properties
  7.9.2. Step 2: File upload
  7.9.3. Step 3: Schema refining
  7.9.4. Step 4: Finalizing the end schema
7.10. Setting up a File LDIF schema
  7.10.1. Step 1: General properties
  7.10.2. Step 2: File upload
  7.10.3. Step 3: Schema definition
  7.10.4. Step 4: Finalizing the end schema
7.11. Setting up an LDAP schema
  7.11.1. Step 1: General properties
  7.11.2. Step 2: Server connection
  7.11.3. Step 3: Authentication and DN fetching
  7.11.4. Step 4: Schema definition
  7.11.5. Step 5: Finalizing the end schema
7.12. Setting up a Salesforce connection
  7.12.1. Step 1: General properties
  7.12.2. Step 2: Connection to a Salesforce account
  7.12.3. Step 3: Retrieving Salesforce modules
  7.12.4. Step 4: Retrieving Salesforce schemas
  7.12.5. Step 5: Finalizing the end schema
7.13. Setting up a Generic schema
  7.13.1. Step 1: General properties
  7.13.2. Step 2: Schema definition
7.14. Setting up an MDM connection
  7.14.1. Step 1: Setting up the connection
  7.14.2. Step 2: Defining MDM schema
7.15. Setting up a Web Service schema
  7.15.1. Setting up a simple schema
7.16. Setting up an FTP connection
  7.16.1. Step 1: General properties
  7.16.2. Step 2: Connection
7.17. Exporting Metadata as context
8.1. What are routines
8.2. Accessing the System Routines
8.3. Customizing the system routines
8.4. Managing user routines
  8.4.1. How to create user routines
  8.4.2. How to edit user routines
  8.4.3. How to edit user routine libraries
8.5. Calling a routine from a Job
8.6. Use case: Creating a file for the current date
9.1. What is ELT
9.2. Introducing Talend SQL templates
9.3. Managing Talend SQL templates
  9.3.1. Types of system SQL templates
  9.3.2. How to access a system SQL template
  9.3.3. How to create user-defined SQL templates
  9.3.4. A use case of system SQL templates
A.1. Main window
A.2. Menu bar and Toolbar
  A.2.1. Menu bar of Talend Open Studio for Data Integration
  A.2.2. Toolbar of Talend Open Studio for Data Integration
A.3. Repository tree view
A.4. Design workspace
A.5. Palette
A.6. Configuration tabs
A.7. Outline and code summary panel
A.8. Shortcuts and aliases
C.3.5. How to calculate the length of a string
C.3.6. How to delete blank characters
C.4. TalendDataGenerator Routines
  C.4.1. How to generate fictitious data
C.5. TalendDate Routines
  C.5.1. How to format a Date
  C.5.2. How to check a Date
  C.5.3. How to compare Dates
  C.5.4. How to configure a Date
  C.5.5. How to parse a Date
  C.5.6. How to retrieve part of a Date
  C.5.7. How to format the Current Date
C.6. TalendString Routines
  C.6.1. How to format an XML string
  C.6.2. How to trim a string
  C.6.3. How to remove accents from a string
Preface
1. General information
1.1. Purpose
This User Guide explains how to manage Talend Open Studio for Data Integration functions in a normal operational context. Information presented in this document applies to Talend Open Studio for Data Integration releases beginning with 5.1.x.
1.2. Audience
This guide is for users and administrators of Talend Open Studio for Data Integration. The layout of GUI screens provided in this document may vary slightly from your actual GUI.
2. History of changes
The following table lists changes made in the Talend Open Studio for Data Integration User Guide.
Version: v5.0_a
Date: 12/12/2011
History of Changes: Updates in the Talend Open Studio for Data Integration User Guide include:
- Post-migration restructuring.
- Updated documentation to reflect new product names. For further information on these changes, see Talend's website.
- Updated chapter: Getting Started with Talend Open Studio for Data Integration.
- Updated chapter: Designing a data integration Job.
- Updated chapter: Mapping data flows.
- Updated chapter: Managing Metadata.
- Updated appendix: Theory into practice: Job examples.

Version: v5.0_b
Date: 13/02/2012
History of Changes: Updates in the Talend Open Studio for Data Integration User Guide include:
- Added legal notices to the User Guide.
- Updated the formatting of part of the User Guide.

Version: v5.1_a
Date: 03/05/2012
History of Changes: Updates in the Talend Open Studio for Data Integration User Guide include:
- Added descriptions about multiple loop elements and setting the root element as loop element when using tXMLMap.
- Updated screenshots and descriptions about setting up a Copybook connection.
Data analytics
Talend Open Studio for Data Integration offers nearly comprehensive connectivity to:

- Packaged applications (ERP, CRM, etc.), databases, mainframes, files, Web Services, and so on, to address the growing disparity of sources.
- Data warehouses, data marts, and OLAP applications, for analysis, reporting, dashboarding, scorecarding, and so on.
- Built-in advanced components for ETL, including string manipulations, Slowly Changing Dimensions, automatic lookup handling, bulk load support, and so on.

Most connectors addressing each of the above needs are detailed in the Talend Open Studio Components Reference Guide. For information about their orchestration in Talend Open Studio for Data Integration, see Chapter 4, Designing a data integration Job. For high-level business-oriented modeling, see Chapter 3, Designing a Business Model.
Operational integration

Data migration/loading and data synchronization/replication are the most common applications of operational data integration, and often require:

- Complex mappings and transformations with aggregations, calculations, and so on, due to variation in data structure,
- Conflicts of data to be managed and resolved taking into account record update precedence or record owner,
- Data synchronization in nearly real time as systems involve low latency.
Most connectors addressing each of the above needs are detailed in the Talend Open Studio Components Reference Guide. For information about their orchestration in Talend Open Studio for Data Integration, see Chapter 4, Designing a data integration Job. For high-level business-oriented modeling, see Chapter 3, Designing a Business Model. For information about designing a detailed data integration Job using the output stream feature, see Section B.2, Using the output stream feature.
1. Unzip the Talend Open Studio for Data Integration zip file and, in the folder, double-click the executable file corresponding to your operating system. The Studio zip archive contains binaries for several platforms, including Mac OS X and Linux/Unix.
2. In the [License] window that appears, read and accept the terms of the end user license agreement to continue. The startup window appears.
This screen appears only when you launch Talend Open Studio for Data Integration for the first time or if all existing projects have been deleted.

3. Click the Import button to import the selected demo project, type a project name in the Create A New Project field and click the Create button to create a new project, or click the Advanced... button to go to the Studio login window. In this procedure, click Advanced... to go to the Studio login window. For more information about the other two options, see Section 2.4.2, How to import the demo project and Section 2.4.1, How to create a project respectively.
4. Click one of the following buttons, depending on what you want to do:

Create...: create a new project that will hold all Jobs and Business Models designed in the Studio. For more information, see Section 2.4.1, How to create a project.

Import...: import one or more existing projects. For more information, see Section 2.4.3, How to import projects.

Demo Project...: import the Demo project, which includes numerous samples of ready-to-use Jobs. This Demo project can help you understand the functionalities of different Talend components. For more information, see Section 2.4.2, How to import the demo project.

Open: open the selected existing project. For more information, see Section 2.4.4, How to open a project.

Delete...: open a dialog box in which you can delete any created or imported project that you do not need anymore. For more information, see Section 2.4.5, How to delete a project.
As the purpose of this procedure is to create a new project, click Create... to open the [New project] dialog box.

5. In the dialog box, enter a name for your project and click Finish to close the dialog box. The name of the new project is displayed in the Project list.
6. Select the project, and click Open. The Connect to TalendForge page appears, inviting you to connect to the Talend Community so that you can check, download, and install external components, and upload your own components to share with other Talend users, directly in the Exchange view of your Job designer in the Studio. To learn more about the Talend Community, click the read more link. For more information on using and sharing community components, see Section 4.5.3, How to download/upload Talend Community components.
7. If you want to connect to the Talend Community later, click Skip to continue.
8. If you are working behind a proxy, click Proxy setting and fill in the Proxy Host and Proxy Port fields of the Network setting dialog box.
9. By default, the Studio automatically collects product usage data and sends the data periodically to servers hosted by Talend, for product usage analysis and sharing purposes only. If you do not want the Studio to do so, clear the I want to help to improve Talend by sharing anonymous usage statistics check box.
You can also turn usage data collection on or off in the Usage Data Collector preferences settings. For more information, see Section 2.5.15, Usage Data Collector preferences.

10. Fill in the required information, select the I Agree to the TalendForge Terms of Use check box, and click Create Account to create your account and connect to the Talend Community automatically. If you have already created an account at http://www.talendforge.org, click the or connect on existing account link to sign in. Be assured that any personal information you may provide to Talend will never be transmitted to third parties nor used for any purpose other than joining and logging in to the Talend Community and being informed of the latest Talend updates.
This page will not appear again at Studio startup once you successfully connect to the Talend Community or if you click Skip too many times. You can show this page again from the [Preferences] dialog box. For more information, see Section 2.5.3, Exchange preferences.

A progress information bar and a welcome window display consecutively. From this page you have direct links to the user documentation, tutorials, the Talend forum, Talend Exchange, and the latest Talend news.

11. Click Start now! to open the Talend Open Studio for Data Integration main window. The main window opens on a welcome page with useful tips for beginners on how to get started with the Studio. Clicking an underlined link brings you to the corresponding tab view or opens the corresponding dialog box. For more information on how to open a project, see Section 2.4.4, How to open a project.
2. In the dialog box, set the path to the new workspace directory you want to create and then click OK to close the view. On the login window, a message displays prompting you to restart the Studio.
3. Click Restart to restart the Studio.
4. On the re-initiated login window, set up a project for this new workspace directory. For more information, see Section 2.2.2, How to set up a project.
5. Select the project from the Project list and click Open to open the Talend Open Studio for Data Integration main window.
All business models or Jobs you design in the current instance of the Studio will be stored in the new workspace directory you created. When you need to connect to any of the workspaces you have created, simply repeat the process described in this section.
For more information, see Section 2.4.2, How to import the demo project.

- Create a local project. When connecting to Talend Open Studio for Data Integration for the first time, there are no default projects listed. You need to create a project and open it in the Studio to store all the Jobs and business models you create in it. When creating a new project, a tree folder is automatically created in the workspace directory on your repository server. This corresponds to the Repository tree view displayed on the Talend Open Studio for Data Integration main window. For more information, see Section 2.4.1, How to create a project.
- Import projects you have already created with previous releases of Talend Open Studio for Data Integration into your current Talend Open Studio for Data Integration workspace directory by clicking Import.... For more information, see Section 2.4.3, How to import projects.
- Open a project you created or imported in the Studio. For more information, see Section 2.4.4, How to open a project.
- Delete local projects that you already created or imported and that you do not need any longer. For more information, see Section 2.4.5, How to delete a project.

Once you launch Talend Open Studio for Data Integration, you can export the resources of one or more of the created projects in the current instance of the Studio. For more information, see Section 2.4.6, How to export a project.
3. In the Project name field, enter a name for the new project, or change the previously specified project name if needed. This field is mandatory. A message shows at the top of the wizard, according to the location of your pointer, to inform you about the nature of the data to be filled in, such as forbidden characters. The read-only technical name is used by the application as the file name of the actual project file. This name usually corresponds to the project name, upper-cased and concatenated with underscores if needed.
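The upper-casing and underscore-joining rule described above can be sketched in plain Java. This is an illustrative guess at the derivation, not Talend's actual implementation; the helper name `technicalName` is hypothetical:

```java
public class TechnicalName {
    // Hypothetical sketch: upper-case the project name and replace runs
    // of non-alphanumeric characters with underscores, as the guide
    // describes for the read-only technical name.
    static String technicalName(String projectName) {
        return projectName.trim()
                .replaceAll("[^A-Za-z0-9]+", "_")
                .toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(technicalName("My Project")); // MY_PROJECT
    }
}
```

For example, a project named "My Project" would get the technical name MY_PROJECT under this sketch.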
4. Click Finish. The name of the newly created project is displayed in the Project list in the Talend Open Studio for Data Integration login window.
To open the newly created project in Talend Open Studio for Data Integration, select it from the Project list and then click Open. A generation engine initialization window displays. Wait until the initialization is complete. Later, if you want to switch between projects, use File > Switch Project on the Studio menu bar. If you have already used Talend Open Studio for Data Integration and want to import projects from a previous release, see Section 2.4.3, How to import projects.
2. Type in a name for the new project, and click Finish to create the project. A confirmation message is displayed, informing you that the demo project has been successfully imported in the current instance of the Studio.
3. Click OK to close the confirmation message. All the samples of the demo project are imported into the newly created project, and the name of the new project is displayed in the Project list on the login screen.

To import the demo project TALENDDEMOSJAVA into your repository:

1. Click Advanced..., and then from the login window click Demo Project.... The [Import demo project] dialog box opens.
2. Select the demo project and then click Finish to close the dialog box. A confirmation message is displayed, informing you that the demo project has been successfully imported in the current instance of the Studio.
3. Click OK to close the confirmation message. The imported demo project displays in the Project list on the login window.
To open the imported demo project in Talend Open Studio for Data Integration, select it from the Project list and then click Open. A generation engine initialization window displays. Wait until the initialization is complete. The Job samples in the open demo project are automatically imported into your workspace directory and made available in the Repository tree view under the Job Designs folder. You can use these samples to get started with your own Job design.
1. If you are launching Talend Open Studio for Data Integration for the first time, click Advanced... to open the login window. From the login window, click Import... to open the [Import] wizard.
2. Click Import several projects if you intend to import more than one project simultaneously.
3. Click Select root directory or Select archive file depending on the source you want to import from.
4. Click Browse... to select the workspace directory/archive file of the specific project folder. By default, the selected workspace is the current release's. Browse up to reach the previous release's workspace directory or the archive file containing the projects to import.
5. Select the Copy projects into workspace check box to make a copy of the imported project instead of moving it. If you want to remove the original project folders from the Talend Open Studio for Data Integration workspace directory you import from, clear this check box. However, we strongly recommend keeping it selected for backup purposes.
6. From the Projects list, select the projects to import.
7. Click Finish to validate the operation. In the login window, the names of the imported projects now appear in the Project list.
You can now select the imported project you want to open in Talend Open Studio for Data Integration and click Open to launch the Studio. A generation initialization window might come up when launching the application. Wait until the initialization is complete.
A progress bar appears, and the Talend Open Studio for Data Integration main window opens. A generation engine initialization dialog box displays. Wait until initialization is complete. When you open a project imported from a previous version of the Studio, an information window pops up to list a short description of the successful migration tasks. For more information, see Section 2.4.7, Migration tasks.
2. Select the check box(es) of the project(s) you want to delete.
3. Click OK to validate the deletion. The project list on the login window is refreshed accordingly. Be careful: this action is irreversible. When you click OK, there is no way to recover the deleted project(s). If you select the Do not delete projects physically check box, you can delete the selected project(s) only from the project list and still have it/them in the workspace directory of Talend Open Studio for Data Integration. Thus, you can recover the deleted project(s) at any time using the Import existing project(s) as local option on the Project list from the login window.
Migration tasks
2. Select the check boxes of the projects you want to export. If need be, you can select only parts of a project through the Filter Types... link (for advanced users).
3. In the To archive file field, type in the name of, or browse to, the archive file where you want to export the selected projects.
4. In the Option area, select the compression format and the structure type you prefer.
5. Click Finish to validate the changes.
The archive file that holds the exported projects is created in the defined location.
Some changes that affect the usage of Talend Open Studio for Data Integration include, for example:

- tDBInput used with a MySQL database becomes a specific tDBMysqlInput component, whose aspect is automatically changed in the Job where it is used.
- tUniqRow used to be based on the input schema keys, whereas the current tUniqRow allows the user to select the column on which to base the uniqueness check.
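As a hedged sketch of what a uniqueness check on a selected column amounts to (plain Java for illustration only, not the code tUniqRow actually generates), rows can be deduplicated on one chosen field like this:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UniqueByColumn {
    // Keep the first row seen for each distinct value of the chosen
    // column, mirroring a uniqueness check based on one selected field.
    static List<String[]> uniqueBy(List<String[]> rows, int columnIndex) {
        Set<String> seen = new HashSet<>();
        List<String[]> result = new ArrayList<>();
        for (String[] row : rows) {
            if (seen.add(row[columnIndex])) {
                result.add(row);
            }
        }
        return result;
    }
}
```

With rows {"1","alice"}, {"2","alice"}, {"3","bob"} and column index 1, only the first "alice" row and the "bob" row survive.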
To customize your Java interpreter path:

1. If needed, click the Talend node in the tree view of the [Preferences] dialog box.
2. Enter a path in the Java interpreter field if the default directory does not display the right path.

On the same view, you can also change the preview limit, the path to the temporary files, and the OS language.
2. Enter the User components folder path or browse to the folder that holds the components to be added to the Talend Open Studio for Data Integration Palette.
3. From the Default mapping links display as list, select the mapping link type you want to use in the tMap.
Exchange preferences
4. Under tRunJob, select the check box if you do not want the corresponding Job to open upon double-clicking a tRunJob component. You will still be able to open the corresponding Job by right-clicking the tRunJob component and selecting Open tRunJob Component.
5. Click Apply and then OK to validate the set preferences and close the dialog box. The external components are added to the Palette.
3. Set the Exchange preferences according to your needs:

- If you are not yet connected to the Talend Community, click Sign In to go to the Connect to TalendForge page, where you can sign in using your Talend Community credentials or create a Talend Community account and then sign in.
- If you are already connected to the Talend Community, your account is displayed and the Sign In button becomes Sign Out. To disconnect from the Talend Community, click Sign Out.
- By default, while you are connected to the Talend Community, whenever an update to an installed community extension is available, a dialog box appears to notify you about it. If you often check for community extension updates and you do not want that dialog box to appear again, clear the Notify me when updated extensions are available check box.
For more information on connecting to the Talend Community, see Section 2.2, Launching Talend Open Studio for Data Integration. For more information on using community extensions in the Studio, see Section 4.5.3, How to download/upload Talend Community components.
3. From the Local Language list, select the language you want to use for the Talend Open Studio for Data Integration graphical interface.
4. Click Apply and then OK to validate your change and close the [Preferences] dialog box.
5. Restart Talend Open Studio for Data Integration to display the graphical interface in the selected language.
In the Talend client configuration area, you can define the execution options to be used by default:
Designer preferences
Stats port range: Specify a range for the ports used for generating statistics, in particular if the ports defined by default are used by other applications.
Trace port range: Specify a range for the ports used for generating traces, in particular if the ports defined by default are used by other applications.
Save before run: Select this check box to save your Job automatically before its execution.
Clear before run: Select this check box to delete the results of a previous execution before re-executing the Job.
Exec time: Select this check box to show the Job execution duration.
Statistics: Select this check box to show the statistics measurement of data flow during Job execution.
Traces: Select this check box to show data processing during Job execution.
Pause time: Enter the time you want to set before each data line in the traces table.
In the Job Run VM arguments list, you can define the parameters of your current JVM according to your needs. The default parameters -Xms256M and -Xmx1024M correspond respectively to the minimal and maximal memory capacities reserved for your Job executions. If you want to use some JVM parameters for only a specific Job execution, for example if you want to display the execution result for this specific Job in Japanese, you need to open this Job's Run view and then, in the Run view, configure the advanced execution settings to define the corresponding parameters. For further information about the advanced execution settings of a specific Job, see Section 4.2.7.4, How to set advanced execution settings. For more information about possible parameters, check http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html.
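As a hedged aside, the effect of such VM arguments can be observed from inside a running JVM. This small stand-alone program (not part of the Studio) prints the heap limits that options like -Xms256M and -Xmx1024M control:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // -Xmx caps the value reported by maxMemory();
        // -Xms influences the initially committed heap (totalMemory()).
        System.out.println("Max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("Total heap (MB): " + rt.totalMemory() / (1024 * 1024));
    }
}
```

Running `java -Xms256M -Xmx1024M HeapInfo` should report a maximum heap close to 1024 MB, although the exact figure varies by JVM implementation.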
4. Select the relevant check boxes to customize your use of the Talend Open Studio for Data Integration design workspace.
3. In the Command field, enter your piece(s) of code before or after %GENERATED_TOS_CALL% to display it/them before or after the code of your Job.
Performance preferences
You can improve performance by deactivating automatic refresh.

3. Set the performance preferences according to your use of Talend Open Studio for Data Integration:

- Select the Deactivate auto detect/update after a modification in the repository check box to deactivate the automatic detection and update of the repository.
- Select the Check the property fields when generating code check box to activate the audit of the property fields of the component. When a property field is not correctly filled in, the component is surrounded in red on the design workspace. You can optimize performance by disabling the property field verification of components, that is, by clearing the Check the property fields when generating code check box.
- Select the Generate code when opening the job check box to generate code when you open a Job.
- Select the Check only the last version when updating jobs or joblets check box to check only the latest version when you update a Job.
- Select the Propagate add/delete variable changes in repository contexts check box to propagate variable changes in the Repository Contexts.
- Select the Activate the timeout for database connection check box to set a database connection timeout, then set this timeout in the Connection timeout (seconds) field.
- Select the Add all user routines to job dependencies, when create new job check box to add all user routines to Job dependencies upon the creation of new Jobs.
- Select the Add all system routines to job dependencies, when create job check box to add all system routines to Job dependencies upon the creation of new Jobs.
Documentation preferences
3. Customize the documentation preferences according to your needs:
• Select the Source code to HTML generation check box to include the source code in the HTML documentation that you will generate.
• Select the Use CSS file as a template when export to HTML check box to activate the CSS File field if you need to use a CSS file to customize the exported HTML files.
For more information on documentation, see Section 5.6.1, How to generate HTML documentation and Section 4.2.6.5, Documentation tab.
SQL Builder preferences

1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend and Specific Settings nodes in succession and then click Sql Builder to display the relevant view.
3. Customize the SQL Builder preferences according to your needs:
• Select the add quotes, when you generated sql statement check box to precede and follow column and table names with quotation marks in your SQL queries.
• In the AS400 SQL generation area, select the Standard SQL Statement or System SQL Statement check box to use standard or system SQL statements respectively when you use an AS400 database.
• Clear the Enable check queries in the database components (disable to avoid warnings for specific queries) check box to deactivate the verification of queries in all database components.
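As an illustration of the quoting option, with it enabled a generated statement wraps identifiers, which matters when table or column names contain mixed case or reserved words. The table and column names below are made up, and the quoting character varies by database (standard SQL uses double quotes, while MySQL, for example, uses backticks):

```
-- with the "add quotes" option enabled
SELECT "id", "first_name" FROM "customers";

-- with the option disabled
SELECT id, first_name FROM customers;
```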
Schema preferences

3. Set the parameters according to your needs:
• In the Default Settings for Fields with Null Values area, fill in the data type and the field length to apply to the null fields.
• In the Default Settings for All Fields area, fill in the data type and the field length to apply to all fields of the schema.
• In the Default Length for Data Type area, fill in the field length for each type of data.
Libraries preferences

3. Set the access path in the External libraries path field through the Browse... button. The default path leads to the library of your current build.

Type conversion
The Metadata Mapping File area lists the XML files that hold the conversion parameters for each database type used in Talend Open Studio for Data Integration. You can import, export, or delete any of the conversion files by clicking Import, Export or Remove respectively.
You can modify any of the conversion files according to your needs by clicking the Edit button to open the [Edit mapping file] dialog box and then modifying the XML code directly in the open dialog box.
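As a rough sketch of what such a mapping file contains, an entry associates a database type with a default length and with a Talend (Java) type. The element and attribute names below are illustrative only and do not reproduce the exact schema of the Studio's mapping files; consult an existing file via the Edit button for the real structure:

```
<!-- illustrative structure only, not the Studio's exact schema -->
<dbms product="MYSQL">
  <dbTypes>
    <dbType type="VARCHAR" defaultLength="255"/>
    <dbType type="INT"/>
  </dbTypes>
  <talendToDbTypes>
    <!-- maps the Talend String type to the VARCHAR database type -->
    <typeMapping talendType="id_String" dbType="VARCHAR"/>
  </talendToDbTypes>
</dbms>
```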
By default, Talend Open Studio for Data Integration automatically collects your Studio usage data and sends this data on a regular basis to servers hosted by Talend. You can view the usage data collection and upload information and customize the Usage Data Collector preferences according to your needs. Be assured that only the Studio usage statistics data will be collected and none of your private information will be collected and transmitted to Talend.
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend node and click Usage Data Collector to display the Usage Data Collector view.
3. Read the message about the Usage Data Collector, and, if you do not want the Usage Data Collector to collect and upload your Studio usage information, clear the Enable capture check box.
4. To have a preview of the usage data captured by the Usage Data Collector, expand the Usage Data Collector node and click Preview.
5. To customize the usage data upload interval and view the date of the last upload, click Uploading under the Usage Data Collector node.
By default, if enabled, the Usage Data Collector collects the product usage data and sends it to Talend servers every 10 days. To change the data upload interval, enter a new integer value (in days) in the Upload Period field. The read-only Last Upload field displays the date and time the usage data was last sent to Talend servers.
2. In the tree diagram to the left of the dialog box, select the setting you wish to customize and then customize it, using the options that appear to the right of the box.
From the dialog box you can also export or import the full assemblage of settings that define a particular project:
• To export the settings, click the Export button. The export will generate an XML file containing all of your project settings.
• To import settings, click the Import button and select the XML file containing the parameters of the project which you want to apply to the current project.
Palette Settings
In the General view of the [Project Settings] dialog box, you can add a project description, if you did not do so when creating the project.
2. In the tree view of the [Project Settings] dialog box, expand Designer and select Palette Settings. The settings of the current Palette are displayed in the panel to the right of the dialog box.
3. Select one or several components, or even set(s) of components, that you want to remove from the current project's Palette.
4. Use the left arrow button to move the selection onto the panel on the left. This will remove the selected components from the Palette.
5. To re-display hidden components, select them in the panel on the left and use the right arrow button to restore them to the Palette.
6. Click Apply to validate your changes and OK to close the dialog box.
To get back to the Palette default settings, click Restore Defaults. For more information on the Palette, see Section 4.2.8.1, How to change the Palette layout and settings.

Version management
2. In the tree view of the dialog box, expand General and select Version Management to open the corresponding view.
3. In the Repository tree view, expand the node holding the items whose versions you want to manage and then select the check boxes of these items.
The selected items display in the Items list to the right along with their current version in the Version column and the new version set in the New Version column.
4. Make changes as required:
• In the Options area, select the Change all items to a fixed version check box to change the version of the selected items to the same fixed version.
• Click Revert if you want to undo the changes.
• Click Select all dependencies if you want to update all of the items dependent on the selected items at the same time.
• Click Select all subjobs if you want to update all of the subjobs dependent on the selected items at the same time.
• To increment each version of the items, select the Update the version of each item check box and change them manually.
• Select the Fix tRunjob versions if Latest check box if you want the father Job of the current version to keep using the child Job(s) of the current version in the tRunjob to be versioned, regardless of how their versions will update. For example, a tRunjob will update from the current version 1.0 to 1.1 at both father and child levels. Once this check box is selected, the father Job 1.0 will continue to use the child Job 1.0 rather than the latest one as usual, say, version 1.1 when the update is done. To use this check box, the father Job must be using child Job(s) of the latest version as current version in the tRunjob to be versioned, by having selected the Latest option from the drop-down version list in the Component view of the child Job(s). For more information on tRunJob, see the Talend Open Studio Components Reference Guide.
5. Click Apply to apply your changes and then OK to close the dialog box.
For more information on version management, see Section 5.5, Managing Job versions.

Status management
In the tree view of the dialog box, expand General and select Status Management to open the corresponding view.
3. In the Repository tree view, expand the node holding the items whose status you want to manage and then select the check boxes of these items. The selected items display in the Items list to the right along with their current status in the Status column and the new status set in the New Status column.
4. In the Options area, select the Change all technical items to a fixed status check box to change the status of the selected items to the same fixed status. Click Revert if you want to undo the changes.
5. To increment each status of the items, select the Update the version of each item check box and change them manually.
6. Click Apply to apply your changes and then OK to close the dialog box.
For further information about Job status, see Section 2.6.8, Status settings.

Job Settings
To do so:
1. On the toolbar of the Studio main window, click the icon, or click File > Edit Project Properties from the menu bar, to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, click the Job Settings node to open the corresponding view.
3. Select the Use project settings when create a new job check boxes of the Implicit Context Load and Stats and Logs areas.
4. Click Apply to validate your changes and then OK to close the dialog box.
In the tree view of the dialog box, expand the Job Settings node and then click Stats & Logs to display the corresponding view.
If you know that the preferences for Stats & Logs will not change depending upon the context of execution, then simply set permanent preferences. If you want to apply the Stats & Logs settings individually, then it is better to set these parameters directly in the Stats & Logs view. For more information about this view, see Section 4.6.7.1, How to automate the use of statistics & logs.
3. Select the Use Statistics, Use Logs and Use Volumetrics check boxes where relevant, to select the type of log information you want to set the path for.
4. Select a format for the storage of the log data: select either the On Files or On Database check box, or select the On Console check box to display the data in the console.
The relevant fields are enabled or disabled according to these settings. Fill out the File Name between quotes or the DB name where relevant according to the type of log information you selected.
You can now store the database connection information in the Repository. Set the Property Type to Repository and browse to retrieve the relevant connection metadata. The fields are completed automatically. Alternatively, if you save your connection information in a Context, you can also access it through Ctrl+Space.

Context settings
In the tree view of the dialog box, expand the Job Settings node and then select the Implicit Context Load check box to display the configuration parameters of the Implicit tContextLoad feature.
3. Select the From File or From Database check box according to the type of source you want to store your contexts in.
4. For files, fill in the file path in the From File field and the field separator in the Field Separator field.
5. For databases, select the Built-in or Repository mode in the Property Type list and fill in the next fields.
6. Fill in the Table Name and Query Condition fields.
7. Select the type of system message you want to have (warning, error, or info) in case a variable is loaded but is not in the context or vice versa.
8. Click Apply to validate your changes and then OK to close the dialog box.
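With the From File option, the context file holds one variable per line, with the variable name and its value separated by the field separator you declared. A minimal sketch, assuming ";" as the Field Separator and made-up variable names:

```
host;localhost
port;3306
dbname;sales
```

When the Job starts, the implicit tContextLoad feature reads this file and assigns each value to the context variable of the same name, raising the message type you configured (warning, error or info) for any mismatch.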
In the tree view of the dialog box, expand the Job Settings node and then click Use Project Settings to display the options for using Implicit Context Load and Stats and Logs in your Jobs.
3. In the Implicit Context Load Settings area, select the check boxes corresponding to the Jobs in which you want to use the implicit context load option.
4. In the Stats Logs Settings area, select the check boxes corresponding to the Jobs in which you want to use the stats and logs option.
5. Click Apply to validate your changes and then OK to close the dialog box.

Status settings
In the tree view of the dialog box, click the Status node to define the main properties of your Repository tree view elements. The main properties of a repository item gather information such as the Name, Purpose, Description, Author, Version and Status of the selected item. Most properties are free text fields, but the Status field is a drop-down list.
3. Click the New... button to display a dialog box and populate the Status list with the most relevant values, according to your needs. Note that the Code cannot be more than 3 characters long and the Label is required.
Talend distinguishes between two status types: Technical status and Documentation status. The Technical status list displays classification codes for elements which are to be run on stations, such as Jobs, metadata or routines. The Documentation status list helps classify the elements of the repository which can be used to document processes (Business Models or documentation).
4. Once you have completed the status setting, click OK to save.
The Status list will offer the status levels you defined here when defining the main properties of your Job designs and business models.
5. In the [Project Settings] dialog box, click Apply to validate your changes and then OK to close the dialog box.
Security settings
2. In the tree view of the dialog box, click the Security node to open the corresponding view.
3. Select the Hide passwords check box to hide your password.
If you select the Hide passwords check box, your password will be hidden in all your documentations, contexts, and so on, as well as in your component properties when you select Repository in the Property Type field of the component Basic settings view. However, if you select Built-in, the password will not be hidden.
4. In the [Project Settings] dialog box, click Apply to validate your changes and then OK to close the dialog box.
2. Select the Filter By Name check box. The corresponding field becomes available.
3. Follow the rules set below the field when writing the patterns you want to use to filter the Jobs. In this example, we want to list in the tree view all Jobs that start with tMap or test.
4. In the [Repository Filter] dialog box, click OK to validate your changes and close the dialog box. Only the Jobs that correspond to the filter you set are displayed in the tree view, those that start with tMap or test in this example.
You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon. This will cause the green plus sign appended on the icon to turn into a red minus sign.
2. Clear the All Users check box. The corresponding fields in the table that follows become available.
This table lists the authentication information of all the users who have logged in to Talend Open Studio for Data Integration and created a Job or an item.
3. Clear the check box next to a user if you want to hide all the Jobs/items created by him/her in the Repository tree view.
4. Click OK to validate your changes and close the dialog box.
All Jobs/items created by the specified user will disappear from the tree view. You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon. This will cause the green plus sign appended on the icon to turn into a red minus sign.
2. In the Filter By Status area, clear the check boxes next to the status type if you want to hide all the Jobs that have the selected status.
3. Click OK to validate your changes and close the dialog box.
All Jobs that have the specified status will disappear from the tree view. You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon. This will cause the green plus sign appended on the icon to turn into a red minus sign.
1. In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter settings from the contextual menu. The [Repository Filter] dialog box displays.
2. Select the check boxes next to the nodes you want to display in the Repository tree view.
Consider, for example, that you want to show in the tree view all the Jobs listed under the Job Designs node, three of the folders listed under the SQL Templates node and one of the metadata items listed under the Metadata node.
3. Click OK to validate your changes and close the dialog box.
Only the nodes/folders for which you selected the corresponding check boxes are displayed in the tree view.
If you do not want to show all the Jobs listed under the Job Designs node, you can filter the Jobs using the Filter By Name check box. For more information on filtering Jobs, see Section 2.7.1, How to filter by Job name.
The Modeler is made of the following panels:
• Talend Open Studio for Data Integration's design workspace,
• a Palette of shapes and lines specific to business modeling,
• the Business Model panel showing specific information about all or part of the model.
This Palette offers graphical representations for objects interacting within a Business Model. The objects can be of different types, from strategic system to output document or decision step. Each one has a specific role in your Business Model according to the description, definition and assignment you give to it. All objects are represented in the Palette as shapes, and can be included in the model.
Note that you must click the business folder to display the library of shapes on the Palette.
3.3.1. Shapes
Select the shape corresponding to the relevant object you want to include in your Business Model. Double-click it or click the shape in the Palette and drop it in the modeling area. Alternatively, for a quick access to the shape library, keep your cursor still on the modeling area for a couple of seconds to display the quick access toolbar:
For instance, if your business process includes a decision step, select the diamond shape in the Palette to add this decision step to your model. When you move the pointer over the quick access toolbar, a tooltip helps you to identify the shapes. Then a simple click will do to make it show on the modeling area. The shape is placed in a dotted black frame. Pull the corner dots to resize it as necessary.
Also, a blue-edged input box allows you to add a label to the shape. Give an expressive name in order to be able to identify at a glance the role of this shape in the model.
Two arrows below the added shape allow you to create connections with other shapes. You can hence quickly define sequence order or dependencies between shapes. Related topic: Section 3.3.2, Connecting shapes.
The available shapes include:
• Decision: the diamond shape generally represents an if condition in the model.
• Callout: allows you to take context-sensitive actions.
• Action: the square shape can be used to symbolize actions of any nature, such as transformation, translation or formatting.
• Terminal: the rounded corner square can illustrate any type of output terminal.
• Data: a parallelogram shape symbolizes data of any type.
• Document: inserts a Document object which can be any type of document and can be used as input or output for the data processed.
• Input: inserts an input object allowing the user to type in or manually provide data to be processed.
• List: forms a list with the extracted data. The list can be defined to hold a certain nature of data.
• Database: inserts a database object which can hold the input or output data to be processed.
• Actor: this schematic character symbolizes players in the decision-support as well as technical processes.
• Ellipse: inserts an ellipse shape.
• Gear: this gearing piece can be used to illustrate pieces of code programmed manually that should be replaced by a Talend Job, for example.
Connecting shapes
There are two possible ways to connect shapes in your design workspace:
Either select the relevant Relationship tool in the Palette, then, in the design workspace, pull a link from one shape to the other to draw a connection between them.
Or, you can implement both the relationship and the element to be related to or from, in a few clicks:
1. Simply move the mouse pointer over a shape that you already dropped on your design workspace, in order to display the double connection arrows.
2. Select the relevant arrow to implement the correct directional connection if need be.
3. Drag a link towards an empty area of the design workspace and release to display the connections popup menu.
4. Select the appropriate connection from the list. You can choose among Create Relationship To, Create Directional Relationship To or Create Bidirectional Relationship To.
5. Then, select the appropriate element to connect to, among the items listed.
You can create a connection to an existing element of the model. Select Existing Element in the popup menu and choose, in the list box that displays, the existing element you want to connect to.
The connection is automatically created with the selected shape. The nature of this connection can be defined using Repository elements, and can be formatted and labelled in the Properties panel, see Section 3.3.4, Business Models. When creating a connection, an input box allows you to add a label to the connection you have created. Choose a meaningful name to help you identify the type of relationship you created. You can also add notes and comments to your model to help you identify elements or connections at a later date. Related topic: Section 3.3.3, How to comment and arrange a model.
Type in the text in the input box or, if the input box does not show, type directly on the sticky note.
If you want to link your notes and specific shapes of your model, click the down arrow next to the Note tool on the Palette and select Note attachment. Pull the black arrow towards an empty area of the design workspace, and release. The popup menu offers to attach a new Note to the selected shape.
You can also select the Add Text feature to type in free text directly in the modeling area. You can access this feature in the Note drop-down menu of the Palette or via a shortcut located next to the Add Note feature on the quick access toolbar.
Place your cursor in the design area, right-click to display the menu and select Arrange all. The shapes automatically move around to give the best possible reading of the model.
Alternatively, you can select manually the whole model or part of it. To do so, right-click any part of the modeling area, and click Select. You can select:
• All shapes and connectors of the model,
• All shapes used in the design workspace,
• All connectors branching together the shapes.
From this menu you can also zoom in and out on parts of the model and change the view of the model.
Business Models
Click the Rulers & Grid tab to access the ruler and grid setting view.
In the Display area, select the Show Ruler check box to show the Ruler, the Show Grid check box to show the Grid, or both check boxes. Grid in front sends the grid to the front of the model. In the Measurement area, select the ruling unit among Centimeters, Inches or Pixels.
In the Grid Line area, click the Color button to set the color of the grid lines and select their style from the Style list. Select the Snap To Grid check box to bring the shapes into line with the grid or the Snap To Shapes check box to bring the shapes into line with the shapes already dropped in the Business Model. You can also click the Restore Defaults button to restore the default settings.
You can also display the assignment list by placing the mouse pointer over the shape you assigned information to.
You can modify some information or attach a comment. Also, if you update data from the Repository tree view, assignment information gets automatically updated. For further information about how to assign elements to a Business Model, see Section 3.4, Assigning repository elements to a Business Model.
You can define or describe a particular object in your Business Model by simply associating it with various types of information, i.e. by adding metadata items. You can set the nature of the metadata to be assigned or processed, thus facilitating the Job design phase.
To assign a metadata item, simply drop it from the Repository tree view to the relevant shape in the design workspace. The Assignment table, located underneath the design workspace, gets automatically updated accordingly with the assigned information of the selected object.
The types of items that you can assign are:
• Job designs: if any Job Designs developed for other projects in the same repository are available, you can reuse them as metadata in the active Business Model.
• Metadata: you can assign any descriptive data stored in the repository to any of the objects used in the model. It can be connection information to a database, for example.
• Business Models: you can use in the active model all other Business Models stored in the repository of the same project.
• Documentation: you can assign any type of documentation in any format. It can be a technical documentation, some guidelines in text format or a simple description of your databases.
• Routines (Code): if you have developed some routines in a previous project, to automate tasks for example, you can assign them to your Business Model. Routines are stored in the Code folder of the Repository tree view.
For more information about the Repository elements, see Chapter 4, Designing a data integration Job.
An asterisk displays in front of the Business Model name on the tab to indicate that changes have been made to the model but not yet saved.
To save a Business Model and increment its version at the same time:
1. Click File > Save as.... The [Save as] dialog box displays.
2. Next to the Version field, click the M button to increment the major version and the m button to increment the minor version.
3. Click Finish to validate the modification.
By default, when you open a Business Model, you open its last version. Any previous version of the Business Model is read-only and thus cannot be modified.
You can access a list of the different versions of a Business Model and perform certain operations. To do that:
1. In the Repository tree view, select the Business Model whose versions you want to consult.
2. Click Business Models > Version in succession to display the version list of the selected Business Model.
3. Right-click the Business Model version you want to consult.
4. Do one of the following:
• Edit Business Model properties: edit the properties of the Business Model. Note that the Business Model should not be open on the design workspace, otherwise it will be in read-only mode.
• Read Business Model: consult the Business Model in read-only mode.
You can open and modify the last version of a Business Model from the Version view if you select Edit Business Model from the drop-down list.
The [New job] wizard opens to help you define the main properties of the new Job.
3. Fill in the Job properties according to your needs:
• Name: the name of the new Job. A message comes up if you enter prohibited characters.
• Purpose: Job purpose or any useful information regarding the Job use.
• Description: Job description.
• Author: a read-only field that shows by default the current user login.
• Locker: a read-only field that shows by default the login of the user who owns the lock on the current Job. This field is empty when you are creating a Job and has data only when you are editing the properties of an existing Job.
• Version: a read-only field. You can manually increment the version using the M and m buttons. For more information, see Section 5.5, Managing Job versions.
• Status: a list to select from the status of the Job you are creating.
• Path: a list to select from the folder in which the Job will be created.
An empty design workspace opens up showing the name of the Job as a tab label.
4. Drop the components you want to use in your Job design from the Palette onto the design workspace and connect them together. For more information, see Section 4.2.2, How to drop components to the workspace and Section 4.2.4, How to connect components together.
5. Define the properties of each of the components used in the Job. For more information, see Section 4.2.6, How to define component properties.
6. Save your Job and then press F6 to execute it. For more information, see Section 4.2.7, How to run a Job.
The Job you created is now listed under the Job Designs node in the Repository tree view. You can open one or more of the created Jobs by simply double-clicking the Job label in the Repository tree view.
To create different folders for your Jobs, complete the following:
1. In the Repository tree view, right-click Job Designs and select Create folder from the contextual menu. The [New folder] dialog box displays.
2. In the Label field, enter a name for the folder and then click Finish to confirm your changes and close the dialog box. The created folder is listed under the Job Designs node in the Repository tree view.
If you have already created Jobs that you want to move into this new folder, simply drop them into the folder.
For a scenario showing how to create a real-life data integration Job, see Appendix B, Theory into practice: Job examples.
Connect components together in a logical order using the connections offered, in order to build a full Job or subjob. For more information about component connection types, see Section 4.3.1, Connection types.
The Job or subjob gets highlighted in one single blue rectangle. For more information about Job and subjob background color, see Section 4.6.6, How to manage the subjob display.
Multiple information or warning icons may show next to the component. Place your mouse pointer over the component icon to display the information tooltip. These icons will display until you have fully completed your Job design and defined all basic (and sometimes advanced) component properties in the Component view.
You will be required to use Java code for your project.
Related topics: Section 4.2.4, How to connect components together; Section 4.6.3.1, Warnings and error icons on components; Section 4.2.6, How to define component properties.
A dialog box prompts you to select the component you want to use among those offered.
3. Select the component and then click OK. The selected component displays on the design workspace.
Alternatively, according to the type of component (Input or Output) that you want to use, perform one of the following operations: Output: Press Ctrl on your keyboard while you are dropping the component onto the design workspace to directly include it in the active Job. Input: Press Alt on your keyboard while you drop the component onto the design workspace to directly include it in the active Job. If you double-click the component, the Component view shows the selected connection details as well as the selected schema information. If you select the connection without selecting a schema, then the properties will be filled with the first encountered schema.
To search for a component, do the following:
1. Click to clear the search field of any text.
2. Enter the name of the component you want to look for and click OK. The Palette displays only the family/families that hold(s) the component.
The components forming a subjob, as well as the subjobs themselves, are connected to each other using various types of connections. Also, a Job (made of one or more subjobs) can be preceded by a pre-job component and followed by a post-job component, in order to ensure that some specific tasks (often not related to the actual data processing) are performed first or last in the process. For more information, see Section 4.5.6, How to use the tPrejob and tPostjob components.
To connect two components, right-click the source component on your design workspace, select your type of connection from the contextual menu, and click the target component. When dragging the link from your source component towards the target component, a graphical plug indicates if the destination component is valid or not. The black crossed circle disappears only when you reach a valid target component.
Only the connections authorized for the selected component are listed on the right-click contextual menu. The types of connections proposed are different for each component according to its nature and role within the Job, i.e. whether the connection is meant to transfer data (from a defined schema) or no data is handled. The types of connections available also depend on whether data comes from one or multiple input files and gets transferred towards one or multiple outputs. For more information about the various types of connections and their specific settings, see Section 4.3, Using connections.
3. Drop the component you want to insert in the middle of the row. The link turns bold and a dialog box opens, prompting you to type in a name for the output link.
4. Type in a name and click OK to close the dialog box. You may be asked to retrieve the schema of the target component. In that case, click OK to accept or No to deny. The component is inserted in the middle of the link, which is now divided into two links.
Each component has specific basic settings according to its function requirements within the Job. For a detailed description of each component's properties and use, see the Talend Open Studio Components Reference Guide.
Some components require code to be input or functions to be set. Make sure you use Java code in properties.
For File and Database components, you can centralize properties in metadata files located in the Metadata directory of the Repository tree view. This means that on the Basic settings tab you can either set properties on the spot, using the Built-In Property Type, or use the properties you stored in the Metadata Manager, using the Repository Property Type. The latter option helps you save time. Select Repository as Property Type and choose the metadata file holding the relevant information. Related topic: Section 4.4.1, How to centralize the Metadata items.
Alternatively, you can drop the Metadata item from the Repository tree view directly onto the component already dropped on the design workspace, for its properties to be filled in automatically.
If you selected the Built-in mode and manually set the properties of a component, you can also save those properties as metadata in the Repository. To do so:
1. Click the floppy disk icon. The metadata creation wizard corresponding to the component opens.
2. Follow the steps in the wizard. For more information about the creation of metadata items, see Chapter 7, Managing Metadata.
3. The metadata displays under the Metadata node of the Repository.
For all components that handle a data flow (most components), you can define a Talend schema in order to describe and possibly select the data to be processed. Like the Properties data, this schema is either Built-in or stored remotely in the Repository in a metadata file that you created. A detailed description of the Schema setting is provided in the next sections.
In all output properties, you also have to define the schema of the output. To retrieve the schema defined in the input component, click Sync columns in the Basic settings view.
Some extra information is required. For more information about the Date pattern, for example, check out: http://docs.oracle.com/javase/6/docs/api/index.html.
You can edit a repository schema used in a Job from the Basic settings view. However, note that the schema then becomes Built-in in the current Job: you cannot change the schema stored in the repository from this window. To edit the schema stored remotely, right-click it under the Metadata node and select the corresponding edit option (Edit connection or Edit file) from the contextual menu.
This can be any parameter, including error messages, the number of lines processed, and so on. The list varies according to the component selected or the context you are working in. Related topic: Section 4.4.2, How to centralize contexts and variables.
The content of the Advanced settings tab changes according to the selected component. Generally, this tab contains the parameters that are not required for a basic or usual use of the component, but may be required for use outside the standard scope.
To customize these types of parameters, as context variables for example, follow these steps:
1. Select the Basic settings or Advanced settings view of the relevant component, containing the parameter you want to define as a variable.
2. Click the Dynamic settings tab.
3. Click the plus button to display a new parameter line in the table.
4. Click the Name of the parameter displayed to show the list of available parameters. For example: Print operations.
5. Click in the facing Code column cell and set the code to be used. For example: context.verbose, if you have created the corresponding context variable, called verbose.
As code, you can input a context variable or a piece of Java code. The corresponding lists or check boxes then become unavailable and are highlighted in yellow in the Basic settings or Advanced settings tab.
If you want to set a parameter as context variable, make sure you create the corresponding variable in the Context view. For more information regarding the context variable definition, see Section 4.4.2.2, How to use variables in the Contexts view. You can also use a global variable or pieces of Java code to store the values to be used for each parameter.
For example, use one of the global variables available through the Ctrl+Space keystrokes, and adapt it to your context.
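To illustrate the idea, generated Job code exposes global variables through a plain map (the proposal list reflects its keys) and context variables through a generated context object. The sketch below is hypothetical: the key name tFileInputDelimited_1_NB_LINE follows the naming pattern shown by Ctrl+Space, not a specific Job.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: reading a global variable the way generated Job code does,
// via a shared map. The key below is a hypothetical example.
public class VariableSketch {
    static final Map<String, Object> globalMap = new HashMap<>();

    static int linesProcessed() {
        // e.g. the row count a previous component stored in the map
        Object nbLine = globalMap.get("tFileInputDelimited_1_NB_LINE");
        return nbLine == null ? 0 : (Integer) nbLine;
    }

    public static void main(String[] args) {
        globalMap.put("tFileInputDelimited_1_NB_LINE", 42); // set by a component
        System.out.println("Rows read: " + linesProcessed());
    }
}
```

In a real Job you would not populate the map yourself; the components do it, and you only read from it in your code snippets.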
You can graphically highlight both the Label and Hint text with HTML formatting tags:
Bold: <b> YourLabelOrHint </b>
Italic: <i> YourLabelOrHint </i>
Carriage return: YourLabelOrHint <br> ContdOnNextLine
Color: <Font color="#RGBcolor"> YourLabelOrHint </Font>
To change your preferences for this view, click Window > Preferences > Talend > Designer.
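For example, the tags above can be combined in a single label or hint. The component name used here is made up for illustration:

```
<b><Font color="#FF0000"> CustomersInput </Font></b>
Reads the customer file <br> one row per customer
```

The first line renders the label in bold red; the second produces a two-line hint.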
In the Documentation tab, you can add your text in the Comment field. Then select the Show Information check box, and an information icon displays next to the corresponding component in the design workspace.
You can show the Documentation in your hint tooltip using the associated variable _COMMENT_, so that when you place your mouse over this icon, the text written in the Comment field displays in a tooltip box.
For advanced use of documentation, you can use the Documentation view in order to store and reuse any type of documentation.
If you have not defined any particular execution context, the context parameter table is empty and the context is the default one. Related topic: Section 4.4.2, How to centralize contexts and variables.
1. Click Run to start the execution.
2. On the same view, the console displays the progress of the execution. The log includes any error messages as well as start and end messages. It also shows the Job output if a tLogRow component is used in the Job design.
To define the number of lines of the execution progress to be displayed in the console, select the Line limit check box and type a value in the field.
Select the Wrap check box to wrap the text to fit the console width. This check box is selected by default. When it is cleared, a horizontal scrollbar appears, allowing you to view the end of the lines.
3. Before running the Job again, you might want to remove the execution statistics and traces from the design workspace. To do so, click the Clear button.
4. If for any reason you want to stop the Job in progress, simply click the Kill button. You will then need to click the Run button again to restart the Job.
Talend Open Studio for Data Integration offers various informative features displayed during execution, such as statistics and traces, which facilitate the Job monitoring and debugging work. For more information, see the following sections.
To access the Debug mode:
1. Click the Run view to access it.
2. Click the Debug Run tab to access the debug execution modes.
In order to be able to run a Job in Debug mode, you need the EPIC module to be installed.
Before running your Job in Debug mode, add breakpoints to the major steps of your Job flow.
This will allow the Job to stop automatically at each breakpoint. This way, components and their respective variables can be verified individually and debugged if required.
To add breakpoints to a component, right-click it on the design workspace and select Add breakpoint from the contextual menu. A pause icon displays next to the component where the break is added.
To switch to debug mode, click the Java Debug button on the Debug Run tab of the Run panel. Talend Open Studio for Data Integration's main window gets reorganized for debugging. You can then run the Job step by step and check each breakpoint component for the expected behavior and variable values.
To switch back to Talend Open Studio for Data Integration designer mode, click Window, then Perspective, and select Talend Open Studio for Data Integration.
This feature allows you to monitor all the components of a Job without switching to the debug mode, and hence without requiring advanced Java knowledge. The Traces function displays the content of processed rows in a table. An exception is made for external components, which cannot offer this feature if their design does not include it.
You can activate or deactivate Traces or decide what processed columns to display in the traces table that displays on the design workspace when launching the current Job. To activate the Traces mode in a Job:
1. Click the Run view.
2. Click the Debug Run tab to access the debug and traces execution modes.
3. Click the down arrow of the Java Debug button and select the Traces Debug option. An icon displays under every flow of your Job to indicate that process monitoring is activated.
4. Click Traces Debug to execute the Job in Traces mode.
To deactivate the Traces mode on a particular flow:
1. Right-click the Traces icon under the relevant flow.
2. Select Disable Traces from the list. A red minus sign replaces the green plus sign on the icon to indicate that the Traces mode has been deactivated for this flow.
To choose which columns of the processed data to display in the traces table, do the following:
1. Right-click the Traces icon for the relevant flow, then select Setup Traces from the list. The [Setup Traces] dialog box appears.
2. In the dialog box, clear the check boxes corresponding to the columns you do not want to display in the Traces table.
3. Click OK to close the dialog box.
Monitoring data processing starts when you execute the Job and stops at the end of the execution. To remove the displayed monitoring information, click the Clear button in the Debug Run tab.
It shows the number of rows processed and the processing time in rows per second, allowing you to spot straight away any bottleneck in the data processing flow.
For trigger links like OnComponentOK, OnComponentError, OnSubjobOK, OnSubjobError and If, the Statistics option displays the state of the trigger during the execution time of your Job: Ok or Error and True or False. An exception is made for external components, which cannot offer this feature if their design does not include it.
In the Run view, click the Advanced settings tab and select the Statistics check box to activate the Stats feature, or clear the box to disable it. The calculation only starts when the Job execution is launched, and stops at the end of it.
Click the Clear button in the Basic or Debug Run views to remove the calculated stats displayed. Select the Clear before Run check box to reset the Stats feature before each execution.
The statistics thread slows down Job execution, as the Job must send this statistics data to the design workspace in order to be displayed.
You can also save your Job before the execution starts. To do so, select the relevant option check box.
1. Select the Advanced settings tab.
2. In the JVM settings area of the tab view, select the Use specific JVM arguments check box to activate the Argument table.
3. Next to the Argument table, click the New... button to open the [Set the VM argument] dialog box.
4. In the dialog box, type in -Dfile.encoding=UTF-8.
5. Click OK to close the dialog box.
This argument can be applied to all of your Job executions in Talend Open Studio for Data Integration. For further information about how to apply this JVM argument to all Job executions, see Section 2.5.5, Debug and Job execution preferences.
If you want the Palette to show permanently, click the left arrow, at the upper right corner of the design workspace, to make it visible at all times.
You can also move the Palette outside the design workspace, within Talend Open Studio for Data Integration's main window. To enable the standalone Palette view, click Window > Show View... > General > Palette.
If you want to set the Palette apart in a panel, right-click the Palette head bar and select Detached from the contextual menu. The Palette opens in a separate view that you can move around wherever you like within Talend Open Studio for Data Integration's main window.
This display/hide option can be very useful when you are in the Favorite view of the Palette. In this view, you usually have a limited number of components; displaying them without their families presents them as a single alphabetical list, which makes them easier to use. For more information about the Palette favorite, see the section called How to set the Palette favorite.
To add a pin, click the pin icon on the top right-hand corner of the family name.
1. From the Palette, right-click the component you want to add to the Palette favorite and select Add To Favorite.
2. Do the same for all the components you want to add to the Palette favorite, then click the Favorite button in the upper right corner of the Palette to display the Palette favorite.
Only the components added to the favorite are displayed. To delete a component from the Palette favorite, right-click the component you want to remove from the favorite and select Remove From Favorite. To restore the Palette standard view, click the Standard button in the upper right corner of the Palette.
To do so, right-click any component family in the Palette and select the desired option in the contextual menu or click Settings to open the [Palette Settings] window and fine-tune the layout.
All you need to do is to click the head border of a panel or to click a tab, hold down the mouse button and drag the panel to the target destination. Release to change the panel position. Click the minimize/maximize icons ( / ) to minimize the corresponding panel or maximize it. For more information on how to display or hide a panel/view, see Section 4.2.8.3, How to display Job configuration tabs/ views.
Click the close icon to close a tab/view. To reopen a view, click Window > Show View > Talend, then click the name of the panel you want to add to your current view, or see Section A.8, Shortcuts and aliases.
If the Palette does not show, or if you want to set it apart in a panel, go to Window > Show view... > General > Palette. The Palette opens in a separate view that you can move around wherever you like within Talend Open Studio for Data Integration's main window.
The Component, Run Job, and Contexts views gather all information relative to the graphical elements selected in the design workspace or to the actual execution of the open Job.
By default, when you launch Talend Open Studio for Data Integration for the first time, the Problems tab is not displayed until the first Job is created. After that, the Problems tab is displayed in the tab system automatically.
The Modules and Scheduler [deprecated] tabs are located in the same tab system as the Component, Logs and Run Job tabs. Both views are independent of the active or inactive Jobs open on the design workspace.
Some of the configuration tabs are hidden by default, such as the Error Log, Navigator, Job Hierarchy, Problems, Modules and Scheduler [deprecated] tabs. You can show hidden tabs in this tab system and directly open the corresponding view: select Window > Show view and then, in the open dialog box, expand the corresponding node and select the element you want to display.
For a detailed description of these tabs, see Section 4.2.8.3, How to display Job configuration tabs/views.
Using connections
Main
This type of row connection is the most commonly used connection. It passes on data flows from one component to the other, iterating on each row and reading input data according to the component's properties setting (schema).
Data transferred through main rows is characterized by a schema definition which describes the data structure in the input file.
You cannot connect two Input components together using a main Row connection. Only one incoming Row connection is possible per component, so you cannot link the same target component twice using main Row connections. The second row linking a component is called a Lookup row.
To connect two components using a Main connection, right-click the input component and select Row > Main on the connection list.
Connection types
Alternatively, you can click the component to highlight it, then right-click it and drag the cursor towards the destination component. This will automatically create a Row > Main type of connection. For information on using multiple Row connections, see the section called Multiple Input/Output.
Lookup
This row link connects a sub-flow component to a main flow component (which should be allowed to receive more than one incoming flow). This connection is used only in the case of multiple input flows.
A Lookup row can be changed into a main row at any time (and conversely, a main row can be changed into a lookup row). To do so, right-click the row to be changed and, on the pop-up menu, click Set this connection as Main. Related topic: the section called Multiple Input/Output.
Filter
This row link specifically connects a tFilterRow component to an output component. It gathers the data matching the filtering criteria. This component also offers a Reject link to fetch the non-matching data flow.
Rejects
This row link connects a processing component to an output component. It gathers the data that does NOT match the filter or is not valid for the expected output. This link allows you to track the data that could not be processed for any reason (wrong type, undefined null value, etc.). On some components, this link is enabled when the Die on error option is deactivated. For more information, refer to the relevant component properties available in the Talend Open Studio for Data Integration Reference Guide.
ErrorReject
This row link connects a tMap component to an output component. This link is enabled when you clear the Die on error check box in the tMap editor and it gathers data that could not be processed (wrong type, undefined null value, unparseable dates, etc.).
Output
This row link connects a tMap component to one or several output components. As the Job output can be multiple, you get prompted to give a name for each output row created.
The system also remembers deleted output link names (and properties, if they were defined), so that you do not have to fill in the property data again if you want to reuse them. Related topic: the section called Multiple Input/Output.
Uniques/Duplicates
These row links connect a tUniqRow to output components. The Uniques link gathers the rows that are found first in the incoming flow. This flow of unique data is directed to the relevant output component or else to another processing subjob. The Duplicates link gathers the possible duplicates of the first encountered rows. This reject flow is directed to the relevant output component, for analysis for example.
Multiple Input/Output
Some components help handle data through multiple inputs and/or multiple outputs. These are often processing-type components, such as tMap. If a Job requires a join or some transformation in one flow, use the tMap component, which is dedicated to this use.
For further information regarding data mapping, see Chapter 4, Designing a data integration Job. For properties regarding the tMap component as well as use case scenarios, see the Talend Open Studio Components Reference Guide.
Trigger connections fall into two categories:
subjob triggers: On Subjob Ok, On Subjob Error and Run if;
component triggers: On Component Ok, On Component Error and Run if.
OnSubjobOK (previously Then Run): this link triggers the next subjob on the condition that the main subjob completed without error. This connection is to be used only from the start component of the Job. These connections are used to orchestrate the subjobs forming the Job, or to easily troubleshoot and handle unexpected errors.
OnSubjobError: this link triggers the next subjob in case the first (main) subjob does not complete correctly. This "on error" subjob helps flag the bottleneck or handle the error if possible. Related topic: Section 4.6.2, How to define the Start component.
OnComponentOK and OnComponentError are component triggers. They can be used with any source component on the subjob. OnComponentOK only triggers the target component once the execution of the source component is complete and without error. Its main use could be to trigger a notification subjob, for example. OnComponentError triggers the subjob or component as soon as an error is encountered in the primary Job.
Run if triggers a subjob or component in case the defined condition is met. For how to set a trigger condition, see the section called Run if connection settings.
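Conceptually, these triggers map onto ordinary control flow in the generated Java code. The sketch below illustrates that mapping only; the method names are invented and this is not actual Talend-generated code.

```java
// Illustrative only: how trigger links behave, expressed as plain Java.
public class TriggerSketch {
    static void mainSubjob() { /* data processing */ }
    static void notifySuccess() { /* OnSubjobOK target */ }
    static void handleError(Exception e) { /* OnSubjobError target */ }
    static boolean condition() { return true; } // a Run if expression

    public static void main(String[] args) {
        try {
            mainSubjob();
            notifySuccess();      // OnSubjobOK: runs only if the subjob succeeded
        } catch (Exception e) {
            handleError(e);       // OnSubjobError: runs as soon as an error occurs
        }
        if (condition()) {        // Run if: runs only when the condition is met
            System.out.println("Run if target executed");
        }
    }
}
```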
The Advanced settings vertical tab lets you monitor the data flow over the connection in a Job without using a separate tFlowMeter component. The measured information will be interpreted and displayed in a supervising tool such as Talend Activity Monitoring Console. For information about Talend Activity Monitoring Console, see Talend Activity Monitoring Console User Guide.
To monitor the data over the connection, perform the following settings in the Advanced settings vertical tab:
1. Select the Monitor this connection check box.
2. Select Use input connection name as label to use the name of the input flow to label your data to be logged, or enter a label in the Label field.
3. From the Mode list, select Absolute to log the actual number of rows passed over the connection, or Relative to log the ratio (%) of the number of rows passed over this connection against a reference connection. If you select Relative, you need to select a reference connection from the Connections List.
4. Click the plus button to add a line in the Thresholds table and define a range of the number of rows to be logged.
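The Relative mode described above amounts to a simple percentage. A sketch of the calculation (the class and method names are ours, not Talend's):

```java
// Sketch of the Relative mode: rows on the monitored connection expressed
// as a percentage of rows on the reference connection.
public class FlowRatio {
    static double relative(long monitoredRows, long referenceRows) {
        if (referenceRows == 0) {
            return 0.0; // avoid division by zero when the reference flow is empty
        }
        return 100.0 * monitoredRows / referenceRows;
    }

    public static void main(String[] args) {
        System.out.println(relative(250, 1000) + " %"); // 25.0 %
    }
}
```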
For more information about flow metrics, see tFlowMeterCatcher component in the Talend Open Studio Components Reference Guide and see Talend Activity Monitoring Console User Guide.
2. When executing your Job, the number of parallel iterations will be distributed onto the available processors.
3. Select the Statistics check box of the Run view to show the real-time parallel executions on the design workspace.
From the metadata wizards, you can collect and centralize connection details to various utilities, including:
DB Connection: connection details and table data description (schema) of any type of database and JDBC connection.
File Delimited/Positional/Regex/XML/Excel/Ldif: file access details and description of data from the listed file types.
LDAP: access details and data description of an LDAP directory.
Salesforce: access details and data description of a Salesforce table.
WSDL: access details and data description of a web service.
Generic schema: access details and data description of any sort of source.
For more information about these metadata creation procedures, see Chapter 4, Designing a data integration Job.
The list grows along with new user-defined variables (context variables).
Related topics: Section 4.4.2.4, How to define variables from the Component view Section 4.4.2.2, How to use variables in the Contexts view
Variables tab
The Variables tab is part of the Contexts view and shows all of the variables that have been defined for each component in the current Job.
From this panel, you can manage your built-in variables:
Add a parameter line to the table by clicking the [+] button.
Edit the Name of the new variable and type in the <Newvariable> name.
Delete built-in variables. (Reminder: repository variables are read-only.)
Import variables from a repository context source, using the Repository variables button.
Display the context variables in their original order by selecting the Original order check box. They are sorted automatically by the studio upon creation in the tab view or when imported from the Repository.
Reorganize the context variables by selecting the variable of interest and then using the up and down arrow buttons. To do so, you need to select the Original order check box to activate the two arrow buttons.
To define the actual value of a newly created variable, click the Value as tree tab.
You can add as many entries as you need on the Variables tab. By default, the variable created is of Built-in type.
Fields and their description:
Name: Name of the variable. You can edit this field, on the condition that the variable is of Built-in type. Repository variables are read-only.
Source: Built-in: the variable is created in this Job and will be used in this Job only. <Repository entry name>: the variable has been defined in a context stored in the repository. The source is thus the actual context group you created in the repository.
Type: Select the type of data being handled. This is required in Java.
Script code: Code corresponding to the variable value. The displayed code will be: context.YourParameterName. This script code is automatically generated when you define the variable in the Component view.
Comment: Add any useful comment.
You cannot create contexts from the Variables view, but only from the Values as table or as tree views. For further information regarding variable definition on the component view, see Section 4.4.2.4, How to define variables from the Component view. For more information about the repository variables, see Section 4.4.2.5, How to store contexts in the repository.
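To make the Script code column concrete: for a variable named host, components read context.host. The following hypothetical sketch shows the kind of object this implies; the field names and default values are invented for illustration, not taken from any generated Job.

```java
// Hypothetical sketch of a generated context object: each variable in the
// Contexts view becomes a typed field, read in components as context.<name>.
class ContextProperties {
    public String host = "localhost"; // value taken from the selected context
    public Integer port = 3306;
}

public class ContextDemo {
    static final ContextProperties context = new ContextProperties();

    static String jdbcUrl() {
        // a component property could reference context.host and context.port
        return "jdbc:mysql://" + context.host + ":" + context.port + "/demo";
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl());
    }
}
```

Switching contexts (for example from a test to a production context) then amounts to populating these fields with a different value set, without touching the Job design.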
From this view, you can: Define the value of a built-in variable directly in the Value field. Note that repository variable values are read-only and can only be edited in the relevant repository context.
Define a question to prompt the user for variable value confirmation at execution time.
Create or edit a context name through the dedicated button at the top right.
Rearrange the variable/context group-by display.
Fields and their description:
Variable: Name of the variables.
Context: Name of the contexts.
Prompt: Select this check box if you want the variable to be editable in the confirmation dialog box at execution time. If you asked for a prompt to pop up, fill in this field to define the message to show in the dialog box.
Value: Value for the corresponding variable. Define the value of your built-in variables. Note that repository variables are read-only.
You can manage your contexts from this tab, through the dedicated button placed on the top right hand side of the Contexts view. See Section 4.4.2.3, How to configure contexts for further information regarding the context management. On the Values as tree tab, you can display the values based on the contexts or on the variables for more clarity. To change the way the values are displayed on the tree, click the small down arrow button, then click the group by option you want. For more information regarding variable definition, see Section 4.4.2.4, How to define variables from the Component view and Section 4.4.2.5, How to store contexts in the repository.
You can manage your contexts from this tab, through the Configure contexts button placed on the top right hand side of the Contexts panel. See Section 4.4.2.3, How to configure contexts for further information regarding the context management. For more information regarding variable definition, see Section 4.4.2.4, How to define variables from the Component view and Section 4.4.2.5, How to store contexts in the repository.
The default context cannot be removed, therefore the Remove button is unavailable. To make it editable, select another context on the list.
Creating a context
Based on the default context you set, you can create as many contexts as you need. To create a new context:
1. Click New in the [Configure Contexts] dialog box.
2. Type in a name for the new context.
3. Click OK to validate the new context.
When you create a new context, the entire default context legacy is copied over to the new context. You then only need to edit the relevant fields on the Value as tree tab to customize the context according to your use.
The Default Context drop-down list shows all the contexts you created. You can switch the default context by simply selecting the new default context in the Default Context list on the Variables tab of the Contexts view.
Note that the default (or last remaining) context can never be removed. There should always be a context to run the Job, whether this context is called Default or any other name.
To carry out changes on the actual values of the context variables, go to the Values as tree or Values as table tabs. For more information about these tabs, see Section 4.4.2.2, How to use variables in the Contexts view.
3. Give a Name to this new variable, fill in the Comment area and choose the Type.
4. Enter a Prompt to be displayed to confirm the use of this variable in the current Job execution (generally used for test purposes only), and select the Prompt for value check box to display the field as an editable value.
5. If you already filled in a value in the corresponding properties field, this value is displayed in the Default value field. Otherwise, type in the default value you want to use for one context.
6. Click Finish to validate.
7. Go to the Contexts view tab. The newly created variables are listed in the context variables tab.
The variable name should follow some typing rules and must not contain any forbidden characters, such as the space character.
The variable created this way is automatically stored in all existing contexts, but you can subsequently change the value independently in each context. For more information on how to create or edit a context, see Section 4.4.2.3, How to configure contexts.
StoreSQLQuery
StoreSQLQuery is a user-defined variable mainly dedicated to debugging. It differs from other context variables in that its main purpose is to be used as a parameter of the specific global variable called Query. It allows you to dynamically feed the global query variable. The global variable Query is available in the proposal list (Ctrl+Space) for some DB input components. For further details on StoreSQLQuery settings, see the Talend Open Studio Components Reference Guide, in particular the scenarios of the tDBInput component.
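A hedged sketch of the mechanism described above: a DB input component stores the SQL it executed in the global map, where it can be inspected for debugging. The key name used here follows the <componentName>_QUERY pattern and is an assumption for illustration, as is the method name.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: a DB input component recording its executed SQL under a
// globalMap key (assumed pattern: <componentName>_QUERY), which is what the
// StoreSQLQuery / Query pair lets you feed and read dynamically.
public class QuerySketch {
    static final Map<String, Object> globalMap = new HashMap<>();

    static void runDbInput(String storeSQLQuery) {
        // ... the component would execute the query against the database here ...
        globalMap.put("tDBInput_1_QUERY", storeSQLQuery); // record it for debugging
    }

    public static void main(String[] args) {
        runDbInput("SELECT id, name FROM customers");
        System.out.println(globalMap.get("tDBInput_1_QUERY"));
    }
}
```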
Procedure 4.1. Create the context group and add required information
1. Right-click the Contexts node in the Repository tree view and select Create new context group from the contextual menu.
A two-step wizard appears to help you define the various contexts and context parameters, which you will be able to select in the Contexts view of the design workspace.
2. In Step 1 of 2, type in a name for the context group to be created, and add any general information such as a description if required.
3. Click Next to go to Step 2 of 2, which allows you to define the various contexts and variables that you need.
Procedure 4.2. Define the default context's variable set to be used as a basis for other contexts
1. On the Variables tab, click the [+] button to add as many new variable lines as needed and define the names of the variables. In this example, we define the variables that can be used in the Name field of the Component view.
2. Select the type of the variable from the Type list. The Script code varies according to the type of variable you selected, and will be used in the generated code. The screen shot above shows the Java code produced.
3. On the Tree or Table views, define the various contexts and the values of the variables.
First, define the values for the default (first) context variables, then create a new context that will be based on the variables values that you just set. For more information about how to create a new context, see Section 4.4.2.3, How to configure contexts. 4. On the Values as tree tab, add a prompt if you want the variable to be editable in a confirmation dialog box at execution time.
To add a prompt message, select the corresponding check box, then type in the message you want to display at execution time. Once you have created and adapted as many context sets as you want, click Finish to validate. The group of contexts then displays under the Contexts node in the Repository tree view.
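As a rough, hypothetical sketch (variable names and values are illustrative, not the generated code itself), a context group with a default context plus an overriding context behaves like layered properties: a second context overrides only what differs and falls back to the defaults for everything else:

```java
// Hedged sketch: a default context and a second context that overrides
// only some variables; unset variables fall back to the default values.
java.util.Properties defaultContext = new java.util.Properties();
defaultContext.setProperty("filePath", "/tmp/in.csv");
defaultContext.setProperty("dbPort", "3306");

java.util.Properties prodContext = new java.util.Properties(defaultContext);
prodContext.setProperty("filePath", "/data/in.csv");   // overridden in this context

String prodPath = prodContext.getProperty("filePath"); // "/data/in.csv"
String prodPort = prodContext.getProperty("dbPort");   // falls back to the default "3306"
```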
4. In the wizard, select the context variables you need to apply, or clear those you do not need. The context variables that have already been applied are automatically selected and cannot be cleared.
5. Click the Run Job tab and, in the Context area, select the relevant context among the various ones you created. If you did not create any context, only the Default context shows on the list.
All the context variables you created for the selected context display, along with their respective values, in a table underneath. If you select the Prompt check box next to some variables, a dialog box opens at execution time allowing you to change the variable values for this Job execution only. To make a change in a variable value permanent, you need to change it in the Contexts view if your variable is of type built-in, or in the Context group of the repository.
Related topics: Section 4.4.2.2, How to use variables in the Contexts view; Section 4.4.2.5, How to store contexts in the repository.
In each of the above categories, you can create your own user-defined SQL templates using the SQL templates wizard and thus centralize them in the repository for reuse. For more information about the use of SQL templates in Talend Open Studio for Data Integration, see Chapter 4, Designing a data integration Job. For more information about how to create a user-defined SQL template and use it in a Job context, see the scenario of tMysqlTableList component in the Talend Open Studio Components Reference Guide.
The [SQL Builder] editor is made of the following panels: Database structure, Query editor made of editor and designer tabs, Query execution view, Schema view. The Database structure shows the tables for which a schema was defined either in the repository database entry or in your built-in connection. The schema view, in the bottom right corner of the editor, shows the column description.
The Diff icons point out that the table contains differences or gaps. Expand the table node to show the exact column containing the differences. The red highlight shows that the content of the column contains differences or that the column is missing from the actual database table. The blue highlight shows that the column is missing from the table stored in Repository > Metadata.
Alternatively, the graphical query Designer allows you to handle tables easily and generates the corresponding query in the Edit tab in real time.
3. Click the Designer tab to switch from the manual Edit mode to the graphical mode. You may get a message while switching from one view to the other, as some SQL statements cannot be interpreted graphically.
4. If you selected a table, all columns are selected by default. Clear the check boxes next to the relevant columns to exclude them from the selection.
You can add more tables with a simple right-click: on the Designer view, right-click and select Add tables from the pop-up list, then select the relevant table to be added. If joins between these tables already exist, they are automatically set up graphically in the editor. You can also create a join between tables very easily: right-click the columns of the first table to be linked and select Equal from the pop-up list to join them with the relevant field of the second table.
5. The SQL statement corresponding to your graphical handling is also displayed in the viewer part of the editor; click the Edit tab to switch back to the manual Edit mode.
In the Designer view, you cannot include filter criteria graphically. You need to add these in the Edit view.
6. Once your query is complete, execute it by clicking the icon on the toolbar.
The toolbar of the query editor allows you to quickly access common commands such as execute, open, save and clear. The results of the active query are displayed in the Results view in the lower left corner.
In Talend Open Studio for Data Integration, you can also upload components you have created to Talend Exchange to share with other community users. A click on the Exchange link on the toolbar of Talend Open Studio for Data Integration opens the Exchange tab view on the design workspace, where you can find lists of: components available in Talend Exchange for you to download and install; components you downloaded in previous versions of Talend Open Studio for Data Integration but not yet installed in your current Studio; and components you have created and uploaded to Talend Exchange to share with other Talend Community users.
Before you can download community components or upload your own components to the community, you need to sign in to Talend Exchange from your Studio. If you did not sign in to Talend Exchange when launching the Studio, you can still sign in from the Talend Exchange preferences settings page. For more information, see Section 2.5.3, Exchange preferences.
The community components available for download are not validated by Talend. This explains why you may sometimes encounter component loading errors when trying to install certain community components, why an installed community component may have a different name in the Palette than in the Exchange tab view, and why you may not be able to find a component in the Palette after it is seemingly installed successfully.
2. In the Available Extensions view, if needed, enter a full component name or part of it in the text field and click the refresh button to quickly find the component you are interested in. Click the view/download link for the component of interest to display the component download page.
3.
4. View the information about the component, including the component description and review comments from community users, or write your own review comments and/or rate the component if you want. For more information on reviewing and rating a community component, see Section 4.5.3.3, How to review and rate a community component. If needed, click the left arrow button to return to the component list page.
5. Click the Install button in the right part of the component download page to start the download and installation process. A progress indicator appears to show the completion percentage of the download and installation process. Upon successful installation of the component, the Downloaded Extensions view opens and displays the status of the component, which is Installed.
2.
3. Fill in the required information, including a title and a review comment, click one of the five stars to rate the component, and click Submit Review to submit your review to the Talend Exchange server. Upon validation by the Talend Exchange moderator, your review is published on Talend Exchange and displayed in the User Review area of the component download page.
2. Click the Add New Extension link in the upper right part of the view to open the component upload page.
3. Complete the required information, including the component title, initial version, Studio compatibility information, and component description; fill in or browse to the path to the source package in the File field; and click the Upload Extension button. Upon successful upload, the component is listed in the My Extensions view, where you can update, modify and delete any component you have uploaded to Talend Exchange.
2. Fill in the initial version and Studio compatibility information, fill in or browse to the path to the source package in the File field, and click the Update Extension button. Upon successful upload of the updated component, the component is replaced with the new version on Talend Exchange and the My Extensions view displays the component's new version and update date.
To modify the information of a component uploaded to Talend Exchange, complete the following:
1. From the My Extensions view, click the icon in the Operation column for the component whose information you want to modify, to open the component information editing page.
2. Complete the Studio compatibility information and component description, and click the Modify Extension button to update the component information on Talend Exchange.
To delete a component you have uploaded to Talend Exchange, click the icon for the component in the My Extensions view. The component is then removed from Talend Exchange and is no longer displayed in the component list in the My Extensions view.
The table below describes the information presented in the Modules view.
Status: points out whether a module is installed or not installed on your system. The icon indicates that the module is not necessarily required for the corresponding component listed in the Context column. The icon indicates that the module is absolutely required for the corresponding component.
Context: lists the name of the Talend component using the module. If this column is empty, the module is required for the general use of Talend Open Studio for Data Integration. This column also lists any external libraries added to the routines you create and save in the Studio library folder. For more information, see Section 8.4.3, How to edit user routine libraries.
Module: lists the exact name of the module.
Description: explains why the module/library is required.
Required: a selected check box indicates that the module is required.
To install any missing module, complete the following: 1. In the Modules view, click the icon in the upper right corner of the view.
The [Open] dialog box of your operating system appears. 2. Browse to the module you want to install, select it and then click Open on the dialog box. The dialog box closes and the selected module is installed in the library folder of the current Studio. You can now use the component dependent on this module in any of your Job designs.
This view is empty if you have not scheduled any task to run a Job. Otherwise, it lists the parameters of all the scheduled tasks. The procedure below explains how to schedule a task in the Scheduler view to run a specific Job periodically and then generate the crontab file that will hold all the data required to launch the selected Job. It also points out how to use the generated file with the crontab command in Unix or a task scheduling program in Windows. 1. Click the icon in the upper right corner of the Scheduler view.
2. From the Project list, select the project that holds the Job you want to launch periodically.
3. Click the three-dot button next to the Job field and select the Job you want to launch periodically.
4. From the Context list, if more than one exists, select the desired context in which to run the Job.
5. Set the time and date details necessary to schedule the task. The command that will be used to launch the selected Job is generated automatically and attached to the defined task.
6. Click Add this entry to validate your task and close the dialog box. The parameters of the scheduled task are listed in the Scheduler view.
7. Click the icon in the upper right corner of the Scheduler view to generate a crontab file that will hold all the data required to start the selected Job. The [Save as] dialog box displays.
8. Browse to set the path to the crontab file you are generating, enter a name for the crontab file in the File name field, and then click Save to close the dialog box. The crontab file corresponding to the selected task is generated and stored locally in the defined path.
9. In Unix, paste the content of the crontab file into the crontab configuration of your Unix system; in Windows, install a task scheduling program that will use the generated crontab file to launch the selected Job.
You can click the relevant icon to delete any of the listed tasks, or the relevant icon to edit the parameters of any of the listed tasks.
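For illustration, a generated crontab entry follows the standard five-field cron schedule followed by the launch command; the path, Job name and schedule below are hypothetical:

```
# minute hour day-of-month month day-of-week  command
0 2 * * 1-5  /opt/talend/jobs/MyJob/MyJob_run.sh --context=Default
```

This example line would run the Job's shell launcher at 02:00 on weekdays; on Windows, a scheduling program interprets the same fields.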
Tasks that require a tPrejob component include, for example: loading context information required for the subjob execution, opening a database connection, and making sure that a file exists. Many more tasks that are collateral to your Job and might damage its overall readability may also need a prejob component. Tasks that require a tPostjob component include, for example: clearing a folder or deleting a file, and any tasks to be carried out even though the preceding subjob(s) failed.
The Use Output Stream feature can be found in the Basic settings view of a number of components, such as tFileOutputDelimited. To use this feature, select the Use Output Stream check box in the Basic settings view of a component that has it. In the Output Stream field that is thus enabled, define your output stream using a command. Prior to using the output stream feature, you have to open a stream. For a detailed example illustrating this prerequisite and the usage of the Use Output Stream feature, see Section B.2, Using the output stream feature. For an example of a Job using this feature, see the second scenario of tFileOutputDelimited in the Talend Open Studio Components Reference Guide.
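As a hedged sketch of the prerequisite (the globalMap key and the stream type are illustrative assumptions), you first open a stream, typically in an earlier component, store it, and then point the Output Stream field at it:

```java
// Hedged sketch: open a stream and store it so the Output Stream field can reuse it.
// A ByteArrayOutputStream stands in here for a file, socket or HTTP stream.
java.util.Map<String, Object> globalMap = new java.util.HashMap<String, Object>();
globalMap.put("out_stream", new java.io.ByteArrayOutputStream());

// What an Output Stream field expression would resolve to:
java.io.OutputStream os = (java.io.OutputStream) globalMap.get("out_stream");
try {
    os.write("id;name\n".getBytes()); // the component writes its rows here
    os.close();
} catch (java.io.IOException e) {
    throw new RuntimeException(e);
}
```

The key point is that the stream must already be open before the component that writes to it starts.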
3. On the Connection component's view, select the Use or register a shared DB Connection check box.
4. In the Shared DB Connection Name field, give a name to the connection you want to share.
You are now able to reuse the connection in your Child Job (and any other Job that requires a connection to the same database).
5. Simply follow the same steps again and make sure you use the same name in the Shared DB Connection Name field.
For more information about how to use the Connection components, see Talend Open Studio Components Reference Guide.
To determine which component is to be the Start component of your Job, identify the main flow and the secondary flows of your Job. The main flow is the one connecting a component to the next component using a Row type link. The Start component is then automatically set on the first component of the main flow (icon with a green background). The secondary flows are also connected using a Row type link, which is then called a Lookup row on the design workspace to distinguish it from the main flow. This Lookup flow is used to enrich the main flow with more data. Be aware that you can change the Start component, and hence the main flow, by changing a main Row into a Lookup Row, simply by right-clicking the row to be changed. Related topics: Section 4.2.4, How to connect components together; Section 5.1, Activating/Deactivating a Job or a sub-job.
When the tooltip messages of a component indicate that a module is required, you must install this module for this component using the Modules view. This view is hidden by default. For further information about how to install external modules using this view, see Section 4.5.4, How to install external modules.
The error icon also displays on the tab next to the Job name when you open the Job on the design workspace. The compilation or code generation only takes place when carrying out one of the following operations: opening a Job, clicking the Code Viewer tab, executing a Job (clicking Run Job), or saving the Job. Hence, the red error icon will only show then. When you execute the Job, a warning dialog box opens to list the source and description of any errors in the current Job.
Click Cancel to stop your Job execution or click Continue to continue it. For information on errors on components, see Section 4.6.3.1, Warnings and error icons on components.
You can change the note format. To do so, select the note you want to format and click the Basic settings tab of the Component view.
Select the Opacity check box to display the background color. By default, this box is selected when you drop a note on the design workspace. If you clear this box, the background becomes transparent. You can select options from the Fonts and Colors list to change the font style, size, color, and so on as well as the background and border color of your note. You can select the Adjust horizontal and Adjust vertical boxes to define the vertical and horizontal alignment of the text of your note. The content of the Text field is the text displayed on your note.
4.6.5.1. Outline
The Outline tab offers a quick view of the business model or the open Job on the design workspace, as well as a tree view of all elements used in the Job or Business Model. As the design workspace, like any other window area, can be resized to your needs, the Outline view is convenient for checking whereabouts on the design workspace you are located.
This graphical representation of the diagram highlights, in a blue rectangle, the diagram part showing in the design workspace. Click the blue-highlighted view and hold down the mouse button, then move the rectangle over the Job; the view in the design workspace moves accordingly. The Outline view can also display a folder tree view of the components in use in the current diagram. Expand the node of a component to show the list of variables available for this component. To switch from the graphical outline view to the tree view, click either icon docked at the top right of the panel.
This blue highlight helps you easily distinguish one subjob from another. A Job can be made of one single subjob. An orange square shows the prejob and postjob parts which are different types of subjobs. For more information about prejob and postjob, see Section 4.5.6, How to use the tPrejob and tPostjob components.
In the Basic settings view, select the Show subjob title check box if you want to add a title to your subjob, then fill in the title. To modify the title color and the subjob color:
1. In the Basic settings view, click the Title color/Subjob color button to display the [Colors] dialog box.
2. Set your colors as desired. By default, the title color is blue and the subjob color is transparent blue.
Click the minus sign (-) to collapse the subjob. When reduced, only the first component of the subjob is displayed. Click the plus sign (+) to restore your subjob.
To remove the background color of a specific subjob, right-click the subjob and select the Hide subjob option on the pop-up menu.
3. Set the relevant details depending on the output you prefer (console, file or database).
4. Select the relevant Catch check box according to your needs.
You can save the settings into your Project Settings by clicking the button. This way, you can access such settings via File > Edit project settings > Job settings > Stats & Logs or via the button on the toolbar.
When you use Stats & Logs functions in your Job, you can apply them to all its subjobs.
To do so, click the Apply to subjobs button in the Stats & Logs panel of the Job view and the selected stats & logs functions of the main Job will be selected for all of its subjobs.
From the Palette, you can search for all the Jobs that use the selected component. To do so: 1. In the Palette, right-click the component you want to look for and select Find Component in Jobs.
A progress indicator displays to show the percentage of the search operation that has been completed then the [Find a Job] dialog box displays listing all the Jobs that use the selected component.
2. From the list of Jobs, click the desired Job and then click OK to open it on the design workspace.
In this example, the metadata for the input component is stored in the Repository. For information about metadata creation in the Repository, see Section 4.4.1, How to centralize the Metadata items.
2. Click the [...] button next to Edit schema, and select the Change to built-in property option from the popup dialog box to open the schema editor.
3. Enter Talend between quotation marks in the Default field for the company column, enter Paris between quotation marks in the Default field for the city column, and click OK to close the schema editor.
4. Configure the output component tLogRow to display the execution result the way you want, and then run the Job.
In the output data flow, the missing information is completed according to the set default values.
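The defaulting behavior can be pictured in plain Java (the input values below are hypothetical; "Talend" and "Paris" are the schema defaults set in the scenario):

```java
// Hedged sketch: a schema default fills in a column only when the incoming value is missing.
String inputCompany = null;    // value missing from the input flow
String inputCity = "Lyon";     // value present in the input flow
String company = (inputCompany != null) ? inputCompany : "Talend"; // default applies
String city = (inputCity != null) ? inputCity : "Paris";           // input value kept
```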
Alternatively, right-click the component and select the relevant Activate/Deactivate command according to the current component status. If you disable a component, no code will be generated, and you will not be able to add or modify links from the disabled component to active or new components. Related topic: Section 4.6.2, How to define the Start component.
In the dialog box that appears, select the root directory or the archive file to import the items from. If the items to import are still stored on a local repository, use the Select root directory option and browse to the relevant project directory on your system, and then proceed to the next step. If you exported the items from your local repository into an archive file (including source files and scripts), use the Select archive file option, browse to the file and then click Open to go to Step 6.
3. Browse down to the relevant project folder within the workspace directory. It should correspond to the project name you picked.
4. If you only want to import very specific items, such as some Job Designs, you can select the specific folder, such as Process, where all the Job Designs for the project are stored. If you only have Business Models to import, select the specific folder BusinessProcess. But if your project gathers various types of items (Business Models, Job Designs, Metadata, Routines...), we recommend selecting the project folder to import all items in one go. Click OK to continue.
5.
6. Select the overwrite existing items check box if you want to overwrite existing items with the items to be imported that have the same names. This will refresh the Items List.
7. From the Items List, which displays all valid items that can be imported, select the items that you want to import by selecting the corresponding check boxes.
8. Click Finish to validate the import.
The imported items are displayed in the repository in the relevant folders according to their nature. If there are several versions of the same items, they will all be imported into the project you are running, unless you already have identical items.
To export Jobs, complete the following:
1. In the Repository tree view, right-click the Job you want to export, and select Export Job to open the [Export Jobs] dialog box.
You can show/hide a tree view of all Jobs created in Talend Open Studio for Data Integration directly from the [Export Jobs] dialog box by clicking the and the buttons respectively. The Jobs you earlier selected in the Studio tree view display with selected check boxes. This helps you modify the selection of items to be exported directly from the dialog box, without having to close it and go back to the Repository tree view in Talend Open Studio for Data Integration.
2. In the To archive file field, browse to the directory where you want to save your exported Job.
3. In the Job Version area, select the version number of the Job you want to export if you have created more than one version of the Job.
4. Select the Export Type in the list: Autonomous Job, Axis Webservice (WAR), Axis Webservice (Zip), JBoss ESB, or Petals ESB.
5. Select the Extract the zip file check box to automatically extract the archive file in the target directory.
6. In the Options area, select the file type(s) you want to add to the archive file. The check boxes corresponding to the file types necessary for the execution of the Job are selected by default. You can clear these check boxes depending on what you want to export.
Shell launcher: Select this check box to export the .bat and/or .sh files necessary to launch the exported Job. All exports both the .bat and .sh files; Unix exports the .sh file; Windows exports the .bat file.
System routines: Select this check box to export system routines.
User routines: Select this check box to export user routines.
Java sources: Select this check box to export the .java file holding the Java classes generated by the Job when designing it.
Source files: Select this check box to export the sources used by the Job during its execution, including the .item and .properties files, and the Java and Talend sources. If you select the Source files check box, you can reuse the exported Job in a Talend Open Studio for Data Integration installed on another machine. These source files are only used in Talend Open Studio for Data Integration.
Required Talend modules: Select this check box to export the required Talend modules.
Job dependencies: Select this check box if you want to export the dependencies of your Job, i.e. contexts, routines, connections, etc.
Context scripts: Select this check box to export ALL context parameters files, and not just those you select in the corresponding list. To export only one context, select the context that fits your needs from the Context script list; the export includes the .bat or .sh files holding the appropriate context parameters. You can then, if you wish, edit the .bat and .sh files to manually modify the context type.
Apply to children: Select this check box if you want to apply the context selected from the list to all child Jobs.
7. Click the Override parameters values button, if necessary. In the window that opens, you can update, add or remove context parameters and values of the Job context you selected in the list.
8. Click Finish to validate your changes, complete the export operation and close the dialog box.
A zipped file for the Jobs is created in the defined place. If the Job to be exported calls a user routine that contains one or more extra Java classes in parallel with the public class named the same as the user routine, the extra class or classes will not be included in the exported file. To export such classes, you need to include them within the class with the routine name as inner classes. For more information about user routines, see Section 8.4, Managing user routines. For more information about classes and inner classes, see relevant Java manuals.
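A hedged sketch of the recommended layout (class and method names are illustrative): the helper is nested inside the routine class, so exporting the routine also carries the helper along:

```java
// Hedged sketch: keeping a helper as a nested (inner) class of the routine
// so it is included when the routine is exported with the Job.
class MyRoutine {
    String greet(String name) {
        return new Helper().prefix() + name;
    }
    class Helper {                       // nested class: exported together with MyRoutine
        String prefix() { return "Hello, "; }
    }
}
```

By contrast, a second top-level class declared in the same routine file would be left out of the export, as described above.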
Select the type of archive you want to use in your Web application:
WAR: The options are read-only. The generated WAR archive includes all the configuration files necessary for the execution or deployment from the Web application.
ZIP: All options are available. If the configuration files of your Web application are all set, you have the possibility to only set the Context parameters, if relevant, and export only the Classes into the archive.
Once the archive is produced, place the WAR or the relevant Class from the ZIP (or unzipped files) into the relevant location of your Web application server. The URL used to deploy the Job typically reads as follows:
http://localhost:8080/Webappname/services/JobName?method=runJob&args=null
where the parameters stand as follows:
http://localhost:8080/ : the Webapp host and port.
/Webappname/ : the actual name of your web application.
/services/ : the standard call term for web services.
/JobName : the exact name of the Job you want to execute.
?method=runJob&args=null : the method is runJob, to execute the Job.
The call return from the Web application is 0 when there is no error and different from 0 in case of error. For a real-life example of creating and exporting a Job as a Webservice and calling the exported Job from a browser, see Section 5.2.2.3, An example of exporting a Job as a Web service. The tBufferOutput component was especially designed for this type of deployment. For more information regarding this component, see Talend Open Studio Components Reference Guide.
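The URL composition above can be sketched as plain string assembly (host, port, Webappname and JobName are the placeholders from the table, to be replaced with your own values):

```java
// Hedged sketch: building the call URL for a Job deployed as an Axis web service.
String host = "localhost";
int port = 8080;
String webapp = "Webappname";   // actual name of your web application
String job = "JobName";         // exact name of the Job to execute
String callUrl = "http://" + host + ":" + port + "/" + webapp
               + "/services/" + job + "?method=runJob&args=null";
// Open callUrl with any HTTP client; a call return of 0 means the Job ran without error.
```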
2. In the design workspace, select tFixedFlowInput, and click the Component tab to define the basic settings for tFixedFlowInput.
3. Set the Schema to Built-In and click the [...] button next to Edit Schema to describe the data structure you want to create from internal variables. In this scenario, the schema is made of three columns: now, firstname, and lastname.
4. Click the [+] button to add the three parameter lines and define your variables, then click OK to close the dialog box and accept propagating the changes when prompted by the system.
5. The three defined columns display in the Values table of the Basic settings view of tFixedFlowInput.
6. In the Value cell of each of the three defined columns, press Ctrl+Space to access the global variable list, and select TalendDate.getCurrentDate(), TalendDataGenerator.getFirstName(), and TalendDataGenerator.getLastName() for the now, firstname, and lastname columns respectively.
7. In the Number of rows field, enter the number of lines to be generated.
8. In the design workspace, select tFileOutputDelimited, click the Component tab for tFileOutputDelimited, and browse to the output file to set its path in the File name field. Define other properties as needed.
If you press F6 to execute the Job, three rows holding the current date and first and last names will be written to the set output file.
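The data generation can be pictured with this rough Java sketch (the values "John"/"Doe" merely stand in for the random names the TalendDataGenerator routines would return):

```java
// Hedged sketch: what tFixedFlowInput conceptually does with Number of rows = 3.
int numberOfRows = 3;
java.util.List<String[]> rows = new java.util.ArrayList<String[]>();
String now = new java.text.SimpleDateFormat("yyyy-MM-dd").format(new java.util.Date());
for (int i = 0; i < numberOfRows; i++) {
    rows.add(new String[] { now, "John", "Doe" });  // columns: now, firstname, lastname
}
```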
2. Click the Browse... button to select a directory to archive your Job in.
3. In the Job Version area, select the version of the Job you want to export as a web service.
4. In the Export type area, select the export type you want to use in your Web application (WAR in this example) and click Finish. The [Export Jobs] dialog box disappears.
5. Copy the War folder and paste it in the Tomcat webapp directory.
2.
The return code from the Web application is 0 when there is no error and 1 if an error occurs. For a real-life example of creating and exporting a Job as a Web service using the tBufferOutput component, see the tBufferOutput component in the Talend Open Studio Components Reference Guide.
4.
8. In the Category field, type in the category of the service on which the Job will be deployed.
9. In the Message Queue Name field, type in the name of the queue that is used to deploy the Job.
10. Click the Browse... button next to the To archive file field and browse to set the path to the archive file in which you want to export the Job. Then click Finish. The dialog box closes. A progress indicator displays to show the progress percentage of the export operation. The Job is exported in the selected archive. When you copy the ESB archive in the deployment directory and launch the server, the Job is automatically deployed and will be ready to be executed on the ESB server.
To export a Job as a Petals ESB archive, complete the following:
1. In the Repository tree view, right-click the Job you want to export and then select Export Job from the contextual menu. The [Export Jobs] dialog box displays.
2. In the To archive file field, browse to set the path to the archive file in which you want to export the Job.
3. From the Select the job version list, select the Job version you want to export.
4. From the Select export type list in the Export type area, select Petals ESB. The following three options in the Options area are selected by default: Singleton job, User Routines and Source file. You can select any of the other options as needed. The export options are:
Singleton job: Exports the Job as a singleton. A singleton Job can have only one instance running at a time on a given Talend Service Engine in Petals ESB.
Generate the end-point: Generates the end-point at deployment time. If this option is not selected, the end-point name is the Job name with the suffix Endpoint.
Validates all the messages/requests against the WSDL. Selecting this option reduces system performance (disk access).
User Routines: Embeds the user routines in the service-unit.
Source file: Embeds the source files in the generated service-unit.
A list is also available from which to select the context that will be used by default by the Job.
5. In the [Export Jobs] dialog box, click the Edit the exposed contexts link to open the [Context Export] dialog box.
This dialog box will display a list of all the context variables that are used in the exported Job. Here you can specify how each context variable should be exported in the generated WSDL file. 6. Click in the Export Mode field and select from the list the context export mode for each of the listed context variables. The table below explains the export mode options: Export Mode Not exported Parameter In-Attachment Out-Attachment Description The context is not exported (not visible as a parameter). But the context can still be overridden using the native parameters (options) of the Job. The context is exported as a parameter in the WSDL. The context will pass the path of a temporary file which content is attached in the input message. The context will be read after the Job execution. -This context must point to a file, -The file content will be read by the service engine and attached to the response, -The context name will be used as the attachment name, -The file will be deleted by the service engine right after its content was loaded. Parameter and Attachment Out- A mix between the Parameter and the Out-Attachment modes. -The context is exposed as a parameter, -It will also be read after the Job execution, -The file will be deleted anyway,
149
Export Mode
Description The advantage of this export mode is to define the output file destination dynamically.
7. Click OK to validate your choice and close the [Context Export] dialog box.
8. In the [Export Jobs] dialog box, click Finish. The dialog box closes and a progress indicator shows the progress of the export operation. The Job is exported into the selected archive. The Talend Job is now exposed as a service in Petals ESB and can be executed inside the bus.
1. In the Job Version area, select the version number of the Job you want to export if you have created more than one version of the Job.
2. In the Export Type area, select OSGI Bundle For ESB to export your Job as an OSGI Bundle. The extension of your export automatically changes to .jar, as this is what the Talend ESB Container expects.
3. Click the Browse... button to specify the folder in which to export your Job.
4.
If you want to export a database table metadata entry, make sure you select the whole DB connection, and not only the relevant table; selecting only the table will prevent the export process from completing correctly.
3. Right-click while holding down the Ctrl key and select Export items from the pop-up menu:
You can select additional items in the tree for export if required.
4. Click Browse to browse to where you want to store the exported items. Alternatively, define the archive file in which to compress the files for all selected items. If you have several versions of the same item, they will all be exported.
Select the Export Dependencies check box if you want to set and export routine dependencies along with the Jobs you are exporting. By default, all of the user routines are selected. For further information about routines, see Section 8.1, What are routines.
5. Click Finish to close the dialog box and export the items.
Operation: To change value1 and value2 for the respective parameters key1 and key2
Command: --context_param key1=value1 --context_param key2=value2
Operation: To change a value containing space characters, such as in a file path
Command: --context_param key1=path to file
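As a rough illustration of the behavior described above, the sketch below shows how `--context_param key=value` pairs could be parsed into a context map. This is not the code Talend generates; it is a hedged, minimal Java sketch that assumes each pair is split on the first `=`, so values may contain spaces when the shell passes them as a single argument.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ContextParamSketch {
    // Parse --context_param key=value pairs into a map (illustrative only).
    public static Map<String, String> parse(String[] args) {
        Map<String, String> ctx = new LinkedHashMap<>();
        for (int i = 0; i < args.length - 1; i++) {
            if ("--context_param".equals(args[i])) {
                String kv = args[++i];              // the "key=value" token
                int eq = kv.indexOf('=');           // split on the FIRST '='
                ctx.put(kv.substring(0, eq), kv.substring(eq + 1));
            }
        }
        return ctx;
    }

    public static void main(String[] args) {
        Map<String, String> ctx = parse(new String[] {
            "--context_param", "key1=value1",
            "--context_param", "key2=path to file" });
        System.out.println(ctx); // {key1=value1, key2=path to file}
    }
}
```

Passing the same flag twice, as in the table above, simply sets two different context parameters.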
3. Click Yes to close the message and implement the changes throughout all Jobs impacted by these changes. For more information about the first way of propagating all your changes, see Section 5.3.1.2, How to update impacted Jobs automatically.
Click No if you want to close the message without propagating the changes. This allows you to propagate your changes to the impacted Jobs manually, on a one-by-one basis. For more information on another way of propagating changes, see Section 5.3.1.3, How to update impacted Jobs manually.
You can open the [Update Detection] dialog box at any time by right-clicking the item centralized in the Repository tree view and selecting Manage Dependencies from the contextual menu. For more information, see Section 5.3.1.3, How to update impacted Jobs manually.
2. If needed, clear the check boxes that correspond to the Jobs you do not wish to update. You can update them at any time later through the Detect Dependencies menu. For more information, see Section 5.3.1.3, How to update impacted Jobs manually.
3. Click OK to close the dialog box and update all selected Jobs.
1. In the Repository tree view, expand the node holding the entry for which you want to check what Jobs use it.
2. Right-click the entry and select Detect Dependencies. A progress bar indicates the progress of checking for all Jobs that use the modified metadata or context parameter. Then a dialog box displays to list all Jobs that use the modified item.
3. Select the check boxes corresponding to the Jobs you want to update with the modified metadata or context parameter and clear those corresponding to the Jobs you do not want to update.
4. Click OK to validate and close the dialog box. The Jobs that you chose not to update will be switched back to Built-in, as the link to the Repository cannot be maintained. They will thus keep their settings as they were before the change.
2. Enter the Job name or part of the Job name in the upper field. When you start typing your text in the field, the Job list is updated automatically to display only the Job(s) whose name(s) match the letters you typed in.
3. Select the desired Job from the list and click Link Repository to automatically browse to the selected Job in the Repository tree view.
4. If needed, click Cancel to close the dialog box and then right-click the selected Job in the Repository tree view to perform any of the available operations in the contextual menu. Otherwise, click OK to close the dialog box and open the selected Job on the design workspace.
You can also save a Job and increment its version at the same time by clicking File > Save as.... This option does not overwrite your current Job; it saves your Job as a new Job and/or with another version.
You can access a list of the different versions of a Job and perform certain operations. To do that:
1. In the Repository tree view, select the Job whose versions you want to consult.
2. Click Job > Version in succession to display the version list of the selected Job.
3. Right-click the Job version you want to consult.
4. Do one of the following:
Select Edit Job to open the last version of the Job. This option is only available when you select the last version of the Job.
Other options in the contextual menu let you: consult the Job in read-only mode; consult the hierarchy of the Job; run the Job/Route; and edit the Job properties. Note: to edit properties, the Job should not be open on the design workspace, otherwise it opens in read-only mode; this option is only available when you select the last version of the Job.
You can also manage the version of several Jobs and/or metadata at the same time, as well as Jobs and their dependencies and/or child Jobs from the Project Settings. For more information, see Section 2.6.2, Version management.
3. Browse to the location where the generated documentation archive should be stored.
4. In the same field, type in a name for the archive gathering all generated documents.
5. Click Finish to validate the generation operation.
The archive file is generated in the defined path. It contains all required files along with the HTML output file. You can open the HTML file in your favorite browser.
1. On the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend > Import/Export nodes in succession and select SpagoBI Server to display the relevant view.
3. Select the Enable/Disable Deploy on SpagoBI check box to activate the deployment operation.
4. Click New to open the [Create new SpagoBi server] dialog box and add a new server to the list.
5. Fill in the server details:
- the internal engine name used in Talend Open Studio for Data Integration (this name is not used in the generated code);
- a free-text description of the server entry you are recording;
- the IP address or host name of the machine running the SpagoBI server;
- the user name required to log on to the SpagoBI server;
- the password for SpagoBI server logon authentication.
6. Click OK to validate the details of the new server entry and close the dialog box.
The newly created entry is added to the table of available servers. You can add as many SpagoBI entries as you need. 7. Click Apply and then OK to close the [Preferences] dialog box.
1. In the Repository tree view, expand Job Designs and right-click the Job to deploy.
2. From the drop-down list, select Deploy on SpagoBI.
3. As for any Job export, select a name for the Job archive that will be created and fill it in the To archive file field.
4. Select the relevant SpagoBI server from the drop-down list.
5. The Label, Name and Description fields come from the Job main properties.
6. Select the relevant context from the list.
7. Click OK once you have completed the setting operation.
The Jobs are now deployed onto the relevant SpagoBI server. Open your SpagoBI administrator to execute your Jobs.
This figure presents the interface of tMap. That of tXMLMap differs slightly in appearance: for example, in addition to the Schema editor and the Expression editor tabs on the lower part of this interface, tXMLMap has a third tab called Tree schema editor. For further information about tXMLMap, see Section 6.3, tXMLMap operation.
The Map Editor is made of several panels:
- The Input panel is the top left panel on the editor. It offers a graphical representation of all (main and lookup) incoming data flows. The data are gathered in various columns of input tables. Note that the table name reflects the main or lookup row from the Job design on the design workspace.
- The Variable panel is the central panel in the Map Editor. It allows the centralization of redundant information through the mapping to variables and allows you to carry out transformations.
- The Output panel is the top right panel on the editor. It allows mapping data and fields from Input tables and Variables to the appropriate Output rows.
- Both bottom panels are the Input and Output schema descriptions. The Schema editor tab offers a schema view of all columns of the input and output tables selected in their respective panels.
- The Expression editor is the editing tool for all expression keys of Input/Output data, variable expressions and filtering conditions.
The name of the input/output tables in the Map Editor reflects the name of the incoming and outgoing flows (row connections). The following sections present tMap and tXMLMap separately.
tMap uses incoming connections to pre-fill input schemas with data in the Map Editor. Therefore, you cannot create new input schemas directly in the Map Editor. Instead, you need to implement as many Row connections incoming to the tMap component as required, in order to create as many input schemas as needed. In the same way, create as many output row connections as required. However, you can fill in the output with content directly in the Map Editor through a convenient graphical editor. Note that there can be only one Main incoming row. All other incoming rows are of the Lookup type. Related topic: Section 4.3.1.1, Row connection.
Lookup rows are incoming connections from secondary (or reference) flows of data. These reference data might depend directly or indirectly on the primary flow. This dependency relationship is translated with a graphical mapping and the creation of an expression key. The Map Editor requires the connections to be implemented in your Job in order to be able to define the input and output flows in the Map Editor. You also need to create the actual mapping in your Job in order to display the Map Editor in the Preview area of the Basic settings view of the tMap component.
To open the Map Editor in a new window, double-click the tMap icon in the design workspace or click the three-dot button next to the Map Editor in the Basic settings view of the tMap component. The following sections give the information necessary to use the tMap component in any of your Job designs.
Although you can use the up and down arrows to change the order of the Lookup tables, be aware that the Joins between two lookup tables may then be lost. Related topic: Section 6.2.1.2, How to use Explicit Join.
You can use global or context variables or reuse the variables defined in the Variables area. Press Ctrl+Space to access the list of variables. This list gathers together global, context and mapping variables. The list of variables changes according to the context and grows as new variables are created. Only valid mappable variables in the context show on the list.
Docked at the Variable list, a metadata tip box displays to provide information about the selected column. Related topic: Section 6.2.2, Mapping variables
Simply drop column names from one table to a subordinate one to create a Join relationship between the two tables. This way, you can retrieve and process data from multiple inputs. The Join displays graphically as a purple link and automatically creates a key that will be used as a hash key to speed up the match search. You can create direct Joins between the main table and lookup tables. But you can also create indirect Joins from the main table to a lookup table via another lookup table. This requires a direct Join between one of the lookup tables and the main one. You cannot create a Join from a subordinate table towards a superior table in the Input area. The Expression key field which is filled in with the dragged-and-dropped data is editable in the input schema, whereas the column name can only be changed from the Schema editor panel. You can either insert the dragged data into a new entry, replace the existing entries, or concatenate all selected data into one cell.
For further information about possible types of drag and drop, see Section 6.2.4, Mapping the Output setting.
If you have a large number of input tables, you can use the minimize/maximize icon to reduce or restore the table size in the Input area. The Join binding two tables remains visible even when the table is minimized.
Creating a Join automatically assigns a hash key onto the joined field name. The key symbol displays in violet on the input table itself and is removed when the Join between the two tables is removed.
Related topics: Section 6.2.5, Setting schemas in the Map Editor; Section 6.2.1.3, How to use Inner Join.
Along with the explicit Join, you can select whether you want to filter down to a unique match or allow several matches to be taken into account. In the latter case, you can choose to consider only the first match, only the last one, or all of them.
To define the match model for an explicit Join:
1. Click the tMap settings button at the top of the table to which the Join links to display the table properties.
2. Click in the Value field corresponding to Match Model and then click the three-dot button that appears to open the [Options] dialog box.
3. In the [Options] dialog box, double-click the wanted match model, or select it and click OK to validate the setting and close the dialog box.
Unique Match
This is the default selection when you implement an explicit Join. This means that only the last match from the Lookup flow will be taken into account and passed on to the output. The other matches will be then ignored.
First Match
This selection implies that several matches can be expected in the lookup. The First Match selection means that in the lookup only the first encountered match will be taken into account and passed onto the main output flow.
All Matches
This selection implies that several matches can be expected in the lookup flow. In this case, all matches are taken into account and passed on to the main output flow.
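To make the three match models concrete, here is an illustrative Java sketch (not Talend-generated code) of what each model returns when a main row joins a lookup flow that contains several rows matching the same key. The lookup values `m1`, `m2`, `m3` are hypothetical placeholders.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MatchModelSketch {
    // Hypothetical lookup matches for one join key, in arrival order.
    static final List<String> MATCHES = Arrays.asList("m1", "m2", "m3");

    // Unique Match: only the LAST match is passed on to the output.
    static List<String> uniqueMatch() {
        return Collections.singletonList(MATCHES.get(MATCHES.size() - 1));
    }

    // First Match: only the first encountered match is passed on.
    static List<String> firstMatch() {
        return Collections.singletonList(MATCHES.get(0));
    }

    // All Matches: every match is passed on to the main output flow.
    static List<String> allMatches() {
        return MATCHES;
    }

    public static void main(String[] args) {
        System.out.println(uniqueMatch()); // [m3]
        System.out.println(firstMatch());  // [m1]
        System.out.println(allMatches());  // [m1, m2, m3]
    }
}
```

Note that with All Matches, one main row can produce several output rows, which is why this model affects the row count of the output flow.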
An Inner Join table should always be coupled to an Inner Join Reject table. For how to define an output table as an Inner Join Reject table, see Section 6.2.4.4, Lookup Inner Join rejection. You can also use the filter button to decrease the number of rows to be searched and improve the performance (in Java). Related topics: Section 6.2.4.4, Lookup Inner Join rejection Section 6.2.1.5, How to filter an input flow
In the Filter field, type in the condition to be applied. This allows you to reduce the number of rows parsed against the main flow, enhancing performance on long and heterogeneous flows. You can use the auto-completion tool via the Ctrl+Space keystrokes in order to reuse schema columns in the condition statement.
There are various possibilities to create variables:
- Type in your variables freely in Java. Enter strings between quotes or concatenate functions using the relevant operator.
- Add new lines using the plus sign and remove lines using the red cross sign. Press Ctrl+Space to retrieve existing global and context variables.
- Drop one or more Input entries to the Var table.
Select an entry in the Input area, or press the Shift key to select multiple entries of one Input table. Press Ctrl to select either non-appended entries in the same input table or entries from various tables. When selecting entries in the second table, notice that the first selection displays in grey. Hold the Ctrl key down to drag all entries together. A tooltip shows you how many entries are selected. Then various types of drag-and-drop are possible, depending on the action you want to carry out:
- To insert all selected entries as separate variables: simply drag and drop to the Var table. Arrows show you where the new Var entry can be inserted. Each Input is inserted in a separate cell.
- To concatenate all selected input entries with an existing Var entry: drag and drop onto the Var entry, which gets highlighted. All entries get concatenated into one cell. Add the required operators using Java operator signs. The dot concatenates string variables.
- To overwrite a Var entry with the concatenated selected Input entries: drag and drop onto the relevant Var entry, which gets highlighted, then press Ctrl and release. All selected entries are concatenated and overwrite the highlighted Var.
- To concatenate selected input entries with highlighted Var entries and create new Var lines if needed: drag and drop onto an existing Var, then press Shift when browsing over the chosen Var entries. The first entries get concatenated with the highlighted Var entries, and if necessary new lines are created to hold the remaining entries.
3. Enter the Java code according to your needs. The corresponding expression in the output panel is synchronized. Refer to the Java documentation for more information regarding functions and operations.
To open the [Expression Builder] dialog box, click the three-dot button next to the expression you want to open in the Var or Output panel of the Map Editor.
For a use case showing the usage of the expression editor, see the following section.
Two input flows are connected to the tMap component. From the DB input comes a list of names made of a first name and a last name separated by a space character. From the File input comes a list of US states, in lower case. In the tMap, use the expression builder to: first, replace the blank character separating the first and last names with an underscore character, and second, change the states from lower case to upper case.
1. In the tMap, set the relevant inner join to set the reference mapping. For more information regarding tMap, see Section 6.2, tMap operation and Section 6.1, tMap and tXMLMap interfaces.
2. From the main (row1) input, drop the Names column to the output area, and drop the State column from the lookup (row2) input to the same output area. Then click in the first Expression field (row1.Name) to display the three-dot button.
3. In the Category area, select the relevant action you want to perform. In this example, select StringHandling and then select the EREPLACE function.
4. In the Expression area, paste row1.Name in place of the text expression, in order to get: StringHandling.EREPLACE(row1.Name," ","_"). This expression will replace the separating space character with an underscore character in the given string.
5. Now check that the output is correct by typing a dummy value, e.g. Chuck Norris, in the relevant Value field of the Test area and clicking Test!. The correct change should be carried out, for example, Chuck_Norris.
6. Click OK to validate the changes, and then proceed with the same operation for the second column (State).
7. In the tMap output, select the row2.State Expression and click the [...] button to open the Expression Builder again.
8. This time, the StringHandling function to be used is UPCASE. The complete expression reads: StringHandling.UPCASE(row2.State).
9. Once again, check that the expression syntax is correct using a dummy value in the Test area, for example indiana. The Test! result should display INDIANA for this example. Then click OK to validate the changes.
Both expressions are now displayed in the tMap Expression field.
These changes will be carried out along the flow processing. The output of this example is as shown below.
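For intuition, the two mapping expressions of this use case can be approximated in plain Java. This is a hedged sketch, not the Talend StringHandling routine itself: it assumes EREPLACE behaves like a regular-expression replace and UPCASE like `toUpperCase()`.

```java
public class MapExpressionsSketch {
    // Approximation of StringHandling.EREPLACE(input, " ", "_") —
    // assumed here to be a regex-based replace-all.
    static String ereplace(String input, String pattern, String replacement) {
        return input.replaceAll(pattern, replacement);
    }

    // Approximation of StringHandling.UPCASE(input).
    static String upcase(String input) {
        return input.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(ereplace("Chuck Norris", " ", "_")); // Chuck_Norris
        System.out.println(upcase("indiana"));                  // INDIANA
    }
}
```

The Test area of the Expression Builder performs essentially this kind of evaluation against the dummy value you provide.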
To add an independent table, type in the name of the table to be created in the Named field. To create a join between output tables, select from the drop-down list the table from which you want to create the join, then type in the name of the table to be created in the Named field.
Unlike the Input area, the order of output schema tables does not make such a difference, as there is no subordination relationship between outputs (of Join type).
Once all connections, hence output schema tables, are created, you can select and organize the output data via drag and drop. You can drop one or several entries from the Input area straight to the relevant output table. Press Ctrl or Shift and click entries to carry out a multiple selection. Or you can drag expressions from the Var area and drop them to fill in the output schemas with the appropriate reusable data.
Note that if you make any change to the Input column in the Schema Editor, a dialog prompts you to decide whether to propagate the changes throughout all Input/Variable/Output table entries concerned.
Action / Result:
- Drag and drop onto existing expressions: concatenates the selected expression with the existing expressions.
- Drag and drop to the insertion line: inserts one or several new entries at the start or end of the table, or between two existing lines.
- Drag and drop + Ctrl: replaces the highlighted expression with the selected expression.
- Drag and drop + Shift: adds the selected fields to all highlighted expressions. Inserts new lines if needed.
- Drag and drop + Ctrl + Shift: replaces all highlighted expressions with the selected fields. Inserts new lines if needed.
6.2.4.2. Filters
Filters allow you to make a selection among the input fields, and send only the selected fields to various outputs. Click the [+] button at the top of the table to add a filter line.
You can freely enter your filter statements using Java operators and functions. Drop expressions from the Input area or from the Var area to the Filter row entry of the relevant Output table.
An orange link is then created. Add the required Java operator to finalize your filter formula. You can create various filters on different lines. The AND operator is the logical conjunction of all stated filters.
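The way several filter lines combine can be sketched in plain Java: each filter line is a boolean expression, and a row reaches the output only if the AND of all lines is true. The field names below (`state`, `age`) are illustrative placeholders, not from the guide's example.

```java
public class FilterSketch {
    // Two hypothetical filter lines on an output table, ANDed together,
    // mirroring how tMap combines multiple filter lines.
    static boolean passes(String state, int age) {
        boolean line1 = "CA".equals(state); // filter line 1
        boolean line2 = age >= 18;          // filter line 2
        return line1 && line2;              // the AND of all stated filters
    }

    public static void main(String[] args) {
        System.out.println(passes("CA", 30)); // true  -> row is sent to the output
        System.out.println(passes("NY", 30)); // false -> row is filtered out
    }
}
```

A row rejected here is exactly the kind of row that a table defined with Catch output reject (see below in this section) would receive.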
The Reject principle concatenates all non-Reject tables' filters and defines them as an ELSE statement. To define an output table as the Else part of the regular tables:
1. Click the tMap settings button at the top of the output table to display the table properties.
2. Click in the Value field corresponding to Catch output reject and then click the [...] button that appears to display the [Options] dialog box.
3. In the [Options] dialog box, double-click true, or select it and click OK, to validate the setting and close the dialog box.
You can define several Reject tables to offer multiple refined outputs. To differentiate various Reject outputs, add filter lines by clicking the plus arrow button. Once a table is defined as Reject, the verification process is first enforced on the regular tables before taking into consideration possible constraints of the Reject tables. Note that data is not exclusively processed to one output: although a row satisfies one constraint, and hence is routed to the corresponding output, it is still checked against the other constraints and can be routed to other outputs.
A new table called ErrorReject appears in the output area of the Map Editor. This output table automatically comprises two columns: errorMessage and errorStackTrace, retrieving the message and stack trace of the error encountered during the Job execution. Errors can be unparseable dates, null pointer exceptions, conversion issues, etc. You can also drag and drop columns from the input tables to this error reject output table. Those erroneous data can be retrieved with the corresponding error messages and thus be corrected afterward.
Once the error reject table is set, its corresponding flow can be sent to an output component.
To do so, on the design workspace, right-click the tMap component, select Row > ErrorReject in the menu, and click the corresponding output component, here tLogRow. When you execute the Job, errors are retrieved by the ErrorReject flow.
The result contains the error message, its stack trace, and the two columns, id and date, dragged and dropped to the ErrorReject table, separated by a pipe |.
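The pipe-separated row described above can be sketched as follows. This is an illustrative reconstruction of the output format, not Talend code; the field values are hypothetical, and only the column order (errorMessage, errorStackTrace, then the dragged columns id and date) follows the example.

```java
public class ErrorRejectSketch {
    // Build a pipe-separated ErrorReject row: the two automatic columns
    // followed by the columns dragged into the ErrorReject table.
    static String format(String errorMessage, String errorStackTrace,
                         String id, String date) {
        return String.join("|", errorMessage, errorStackTrace, id, date);
    }

    public static void main(String[] args) {
        // Hypothetical unparseable-date error for row id 42.
        System.out.println(format("Unparseable date: \"2010-15-99\"",
                                  "java.text.ParseException: ...",
                                  "42", "2010-15-99"));
    }
}
```

This makes it easy to see why the erroneous input values can be recovered and corrected afterwards: they travel alongside the error message in the same row.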
1. Click the tMap Settings button at the top of the table to display the table properties.
2. Click in the Value field of Schema Type, and then click the three-dot button that appears to open the [Options] dialog box.
3. In the [Options] dialog box, double-click Repository, or select it and click OK, to close the dialog box and display the Schema Id property beneath Schema Type. If you close the Map Editor now without specifying a Repository schema item, the schema type changes back to Built-In.
4. Click in the Value field of Schema Id, and then click the [...] button that appears to display the [Repository Content] dialog box.
5. In the [Repository Content] dialog box, select your schema as you would define a centrally stored schema for any component, and then click OK. The Value field of Schema Id is filled with the schema you just selected, and everything in the Schema editor panel for this table becomes read-only.
Changing the schema type of the subordinate table across a Join from Built-In to Repository causes the Join to get lost. Changes to the schema of a table made in the Map Editor are automatically synchronized to the schema of the corresponding component connected with the tMap component.
Use the tool bar below the schema table, to add, move or remove columns from the schema. You can also load a schema from the repository or export it into a file.
- Column: the column name as defined on the Map Editor schemas and on the Input or Output component schemas.
- Key: shows whether the expression key data should be used to retrieve data through the Join link. If unchecked, the Join relation is disabled.
- Type: the type of data: String, Integer, Date, etc. This column should always be defined in a Java version.
- Length: -1 shows that no length value has been defined in the schema. Specify the length value if any is defined.
- Nullable: clear this check box if the field value should not be null.
- Default: shows any default value that may be defined for this field.
- Comment: free text field. Enter any useful comment.
Input metadata and output metadata are independent from each other. You can, for instance, change the label of a column on the output side without the column label of the input schema being changed. However, any change made to the metadata is immediately reflected in the corresponding schema on the relevant (Input or Output) area of tMap, but also on the schema defined for the component itself on the design workspace. A red-colored background shows that an invalid character has been entered. Most special characters are prohibited, in order for the Job to be able to interpret and use the text entered in the code. Authorized characters include lower-case letters, upper-case letters, and figures (except as the first character).
For this option to be fully activated, you also need to specify the directory on disk where the data will be stored, and the buffer size, namely the number of rows of data each temporary file will contain. You can set the temporary storage directory and the buffer size either in the Map Editor or in the tMap component property settings.
To set the temporary storage directory and the buffer size in the Map Editor:
1. Click the Property Settings button at the top of the input area to display the [Property Settings] dialog box.
2. In the [Property Settings] dialog box, fill in the Temp data directory path field with the full path to the directory where the temporary data should be stored.
3. In the Max buffer size (nr of rows) field, specify the maximum number of rows each temporary file can contain. The default value is 2,000,000.
4. Click OK to validate the settings and close the [Property Settings] dialog box.
To set the temporary storage directory in the tMap component property settings without opening the Map Editor: 1. Click the tMap component to select it on the design workspace, and then select the Component tab to show the Basic settings view.
2. In the Store on disk area, fill in the Temp data directory path field with the full path to the directory where the temporary data should be stored. Alternatively, you can use a context variable through Ctrl+Space if you have set the variable in a Context group in the repository. For more information about contexts, see Section 4.4.2, How to centralize contexts and variables.
At the end of the subjob, the temporary files are cleared. This way, you will limit the use of allocated memory per reference data to be written onto temporary files stored on the disk. As writing the main flow onto the disk requires the data to be sorted, note that the order of the output rows cannot be guaranteed. On the Advanced settings view, you can also set a buffer size if needed. Simply fill out the field Max buffer size (nb of rows) in order for the data stored on the disk to be split into as many files as needed.
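The buffering idea behind Store temp data can be illustrated with a simplified Java sketch: rows are spilled to disk in chunks of at most the buffer size, producing as many temporary files as needed. This is only an intuition aid under stated assumptions, not how tMap actually serializes its lookup data.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SpillSketch {
    // Write rows to files of at most maxBufferSize rows each and return
    // the list of temporary files created (illustrative only).
    public static List<Path> spill(List<String> rows, int maxBufferSize,
                                   Path dir) throws IOException {
        List<Path> files = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += maxBufferSize) {
            Path f = dir.resolve("lookup_" + files.size() + ".tmp");
            Files.write(f, rows.subList(i, Math.min(i + maxBufferSize, rows.size())));
            files.add(f);
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("tmap_sketch");
        // 5 rows with a 2-row buffer -> chunks of 2 + 2 + 1 rows.
        List<Path> files = spill(Arrays.asList("a", "b", "c", "d", "e"), 2, dir);
        System.out.println(files.size());
    }
}
```

This is why a smaller Max buffer size produces more, smaller temporary files in the configured directory.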
2. Click in the Value field corresponding to Lookup Model, and then click the [...] button to display the [Options] dialog box.
3. In the [Options] dialog box, double-click the wanted loading mode, or select it and then click OK, to validate the setting and close the dialog box.
For use cases using these options, see the tMap section of the Talend Open Studio Components Reference Guide. When your lookup is a database table, the best practice is to open the connection to the database at the beginning of your Job design in order to optimize performance. For a use case using this option, see tMap in the Talend Open Studio Components Reference Guide.
data rejecting. Like tMap, a map editor is required to configure these operations. To open this map editor, you can double-click the tXMLMap icon in the design workspace, or alternatively, click the three-dot button next to the Map Editor in the Basic settings view of the tXMLMap component.
tXMLMap and tMap use common approaches to accomplish most of these operations. Therefore, the following sections explain only the particular operations for which tXMLMap is dedicated to processing hierarchical XML data. The operations focusing on hierarchical data are:
- using the Document type to create the XML tree of interest;
- managing the output XML data;
- editing the XML tree schema.
The following sections present more relevant details.
Different from tMap, tXMLMap does not provide the Store temp data option for storing temporary data in a directory on your disk. For further information about this option of tMap, see Section 6.2.6, Solving memory limitation issues in tMap use.
In practice, for most cases, tXMLMap retrieves the schema of its preceding or succeeding components, for example, from a tFileInputXML component or, in an ESB use case, from a tESBProviderRequest component.
This saves much of the manual effort needed to set up the Document type for the XML flow to be processed. However, to modify the XML structure carried as the content of a Document row, you still need to use the Map Editor. Be aware that a Document flow carries a user-defined XML tree and is no more than one single field of a schema, which, like any other schema, may contain different data types across its fields. For further information about how to set a schema, see Section 4.2.6.1, Basic Settings tab. Once the Document type is set for a row of data, a basic XML tree structure is created automatically in the corresponding data flow table in the Map Editor to reflect the details of this structure. This basic structure represents the minimum elements required by a valid XML tree in using tXMLMap:
- The root element: the minimum element required by an XML tree to be processed and, when need be, the foundation on which to develop a sophisticated XML tree.
- The loop element: it determines the element over which the iteration takes place to read the hierarchical data of an XML tree. By default, the root element is set as the loop element.
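To make the role of the loop element concrete, here is a hand-written Java sketch, not code generated by the Studio; the XML content and element names are invented for illustration. It iterates over every occurrence of a loop element, the way tXMLMap reads repeated Customer records:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class LoopElementDemo {
    // Collect the text of one child field for every occurrence of the loop element.
    public static List<String> readLoop(String xml, String loopElement, String field) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList loops = doc.getElementsByTagName(loopElement); // the iteration happens here
        List<String> values = new ArrayList<>();
        for (int i = 0; i < loops.getLength(); i++) {
            Element e = (Element) loops.item(i);
            values.add(e.getElementsByTagName(field).item(0).getTextContent());
        }
        return values;
    }
}
```

With the loop element set to Customer, each Customer occurrence under the root produces one iteration, which is the behavior the Map Editor expresses with loop:true.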
This figure gives an example with the input flow, Customer. Based on this generated XML root, tagged root by default, you can develop the XML tree structure of interest. To do this, you need to:
1. Import the custom XML tree structure from one of the following types of sources:
- XML or XSD files (related topic: Section 6.3.1.2, How to import the XML tree structure from XML and XSD files)
- file XML connections created and stored in the Repository of your Studio (related topic: Section 6.3.1.3, How to import the XML tree structure from the Repository).
If need be, you can develop the XML tree of interest manually, using the options provided in the contextual menu.
2. Reset the loop element for the XML tree you are creating, if need be. You can set as many loops as you need. At this step, you may have to consider the following situations:
- If you have to create several XML trees, you need to define the loop element for each of them.
- If you import the XML tree from the Repository, the loop element will already have been set according to the source structure, but you can still reset it. For further details, see Section 6.3.1.4, How to set or reset a loop element for an imported XML structure.
If needed, you can continue to modify the imported XML tree using the options provided in the contextual menu. The following table presents the operations you can perform through the available options.
Create Sub-element and Create Attribute: Add elements or attributes to develop an XML tree. Related topic: Section 6.3.1.5, How to add a sub-element or an attribute to an XML tree structure
Set a namespace: Add and manage given namespaces on the imported XML tree. Related topic: Section 6.3.1.7, How to manage a namespace
Delete: Delete an element or an attribute. Related topic: Section 6.3.1.6, How to delete an element or an attribute from the XML tree structure
Rename: Rename an element or an attribute.
As loop element: Set or reset an element as the loop element. Multiple loop elements and optional loop elements are supported.
As optional loop: This option is available only for the loop element you have defined. When the corresponding element exists in the source file, an optional loop element works the same way as a normal loop element; otherwise, it automatically resets its parent element as the loop element or, if there is no parent element in the source file, takes the element of the next higher level, up to the root element. In real-world practice, given such differences between the XML tree and the source file structure, we recommend adapting the XML tree to the source file for better performance.
As group element: On the XML tree of the output side, set an element as a group element. Related topic: Section 6.3.1.8, How to group the output data
As aggregate element: On the XML tree of the output side, set an element as an aggregate element. Related topic: Section 6.3.1.9, How to aggregate the output data
Add Choice: Set the Choice element. All of its child elements developed underneath will be contained in this declaration. When tXMLMap processes a choice element, the elements contained in its declaration are not output unless their mapping expressions are appropriately defined.
Set as Substitution: Set the Substitution element to specify the element substitutable for a given head element defined in the corresponding XSD. When tXMLMap processes a substitution element, the elements contained in its declaration are not output unless their mapping expressions are appropriately defined.
The following sections present more details about the process of creating the XML tree.
6.3.1.2. How to import the XML tree structure from XML and XSD files
To import the XML tree structure from an XML file, proceed as follows: 1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example, it is Customer.
2. From this menu, select Import From File.
3. In the pop-up dialog box, browse to the XML file you need to use to provide the XML tree structure of interest and double-click the file.
To import the XML tree structure from an XSD file, proceed as follows: 1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example, it is Customer.
2. From this menu, select Import From File.
3. In the pop-up dialog box, browse to the XSD file you need to use to provide the XML tree structure of interest and double-click the file.
4. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.
The root of the imported XML tree is adaptable: when importing either an input or an output XML tree structure from an XSD file, you can choose an element as the root of your XML tree. Once an XML structure is imported, the root tag is automatically renamed with the name of the XML source. To change this root name manually, you need to use the tree schema editor. For further information about this editor, see Section 6.3.3, Editing the XML tree schema. Then, you need to define the loop element in this XML tree structure. For further information about how to define a loop element, see Section 6.3.1.4, How to set or reset a loop element for an imported XML structure.
6.3.1.3. How to import the XML tree structure from the Repository
To do this, proceed as follows:
1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example, it is Customer.
2. From this menu, select Import From Repository.
3. In the pop-up repository content list, select the XML connection or the MDM connection of interest to import the corresponding XML tree structure.
This figure presents an example of the Repository-stored XML connection. To import an XML tree structure from the Repository, the corresponding XML connection must already have been created. For further information about how to create a file XML connection in the Repository, see Section 7.8, Setting up an XML file schema.
4. Click OK to validate this selection.
The XML tree structure is created and a loop is defined automatically as this loop was already defined during the creation of the current Repository-stored XML connection.
6.3.1.4. How to set or reset a loop element for an imported XML structure
You need to set at least one loop element for each XML tree if it does not have any; if it does, you may have to reset the existing loop element when need be. Whether you are setting or resetting a loop element, proceed as follows:
1. In the created XML tree structure, right-click the element you need to define as loop. For example, you need to define the Customer element as loop in the following figure.
2. From the pop-up contextual menu, select As loop element to define the selected element as loop. Once done, this selected element is marked with the text: loop:true.
If you close the Map Editor without having set the required loop element for a given XML tree, its root element will be set automatically as loop element.
2. In the pop-up [Create New Element] wizard, type in the name you need to use for the added sub-element or attribute.
3. Click OK to validate this creation. The new sub-element or attribute displays in the XML tree structure you are editing.
6.3.1.6. How to delete an element or an attribute from the XML tree structure
From an established XML tree, you may need to delete an element or an attribute. To do this, proceed as follows: 1. In the XML tree you need to edit, right-click the element or the attribute you need to delete.
2. In the pop-up contextual menu, select Delete. The selected element or attribute is then deleted, including all of the sub-elements or attributes attached underneath it.
Defining a namespace
To do this, proceed as follows:
1. In the XML tree of the input or the output data flow you need to edit, right-click the element for which you need to declare a namespace. For example, in a Customer XML tree of the output flow, you need to set a namespace for the root.
2. In the pop-up contextual menu, select Set a namespace. The [Namespace dialog] wizard displays.
3. In this wizard, type in the URI you need to use.
4. If you need to set a prefix for the namespace you are editing, select the Prefix check box in this wizard and type in the prefix you need. In this example, we select it and type in xhtml.
5.
2. In this menu, select Set A Fixed Prefix to open the corresponding wizard.
3. Type in the new default value you need in this wizard.
4. Click OK to validate this modification.
Deleting a namespace
To do this, proceed as follows: 1. In the XML tree that the namespace you need to edit belongs to, right-click this namespace to open the contextual menu.
2.
The group element always processes the data within one single flow. The aggregate element splits this flow into separate and complete XML flows.
Then this element becomes the aggregate element. Text in red is added to it, reading aggregate : true. The following figure presents an example.
2.
To revoke the definition of the aggregate element, simply right-click the defined aggregate element and, from the contextual menu, select Remove aggregate element. To define an element as an aggregate element, ensure that this element has no child element and that the All in one feature is disabled. The As aggregate element option is not available in the contextual menu unless both of these conditions are met. For further information about the All in one feature, see Section 6.3.2.1, How to output elements into one document.
For an example of how to use the aggregate element with tXMLMap, see the Talend Open Studio Components Reference Guide.
tXMLMap provides the group element and the aggregate element to classify data in the XML tree structure. When handling one row of data (one complete XML flow), the behavioral difference between them is:
- The group element always processes the data within one single flow.
- The aggregate element splits this flow into separate and complete XML flows.
2. Click the All in one field and, from the drop-down list, select true or false to decide whether the output XML flow should be one single flow. If you select true, the XML data is output all in one single flow. In this example, the single flow reads as follows:
If you select false, the XML data is output in separate flows, each loop being one flow, neither grouped nor aggregated. In this example, these flows read as follows:
Each flow contains one complete XML structure. To take the first flow as example, its structure reads:
The All in one feature is disabled if you are using the aggregate element. For further information about the aggregate element, see Section 6.3.1.9, How to aggregate the output data
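The true/false behavior described above can be sketched in plain Java. This is a hand-written illustration with invented element names, not code generated by the Studio:

```java
import java.util.ArrayList;
import java.util.List;

public class AllInOneDemo {
    // Wrap each loop entry as <customer>...</customer>; emit either one single
    // document (All in one = true) or one document per loop (All in one = false).
    public static List<String> emit(List<String> names, boolean allInOne) {
        List<String> flows = new ArrayList<>();
        StringBuilder single = new StringBuilder("<root>");
        for (String n : names) {
            String entry = "<customer><name>" + n + "</name></customer>";
            if (allInOne) single.append(entry);
            else flows.add("<root>" + entry + "</root>"); // one separate, complete XML flow per loop
        }
        if (allInOne) flows.add(single.append("</root>").toString());
        return flows;
    }
}
```

With two input rows, true yields one flow containing both customer elements, while false yields two complete single-customer flows, which mirrors the contrast described above.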
2. In the panel, click the Create empty element field and, from the drop-down list, select true or false to decide whether to output empty elements. If you select true, the empty element is created in the output XML flow and output, for example, <customer><LabelState/></customer>. If you select false, the empty element is not output.
For example, in this figure, the types element is the primary loop and the outputted data will be sorted by the values of this element.
In this case of receiving several input loop elements, a [...] button appears next to the receiving loop element or, for flat data, at the head of the table representing the flat data flow. To define the loop sequence, do the following:
1. Click this [...] button to open the sequence arrangement window as presented in the figure used earlier in this section.
2. Use the up or down arrow buttons to arrange this sequence.
The left half of this view is used to edit the tree schema of the input flow, and the right half to edit the tree schema of the output flow. The following table presents further information about this schema editor.

XPath: Use it to display the absolute path pointing to each element or attribute in an XML tree and to edit the name of the corresponding element or attribute.
Key: Select the corresponding check box if the expression key data should be used to retrieve data through the Join link. If unchecked, the Join relation is disabled.
Type: Type of data: String, Integer, Document, etc. This column must always be defined in a Java version.
Nullable: Select this check box if the field value could be null.
Pattern: Define the pattern for the Date data type.
Input metadata and output metadata are independent from each other. You can, for instance, change the label of a column on the output side without the column label of the input schema being changed. However, any change made to the metadata is immediately reflected in the corresponding schema in the relevant (Input or Output) area of tXMLMap, as well as in the schema defined for the component itself in the design workspace. For detailed use cases about the multiple operations that you can perform using tXMLMap, see the Talend Open Studio Components Reference Guide.
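The Pattern column mentioned above expects a Java date/time pattern. The following hand-written sketch (the patterns and the sample value are only examples, not Studio defaults) shows how such a pattern drives parsing and formatting of a Date field:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DatePatternDemo {
    // Parse a raw field value with the pattern declared in the schema,
    // then re-format it with another pattern.
    public static String reformat(String value, String inPattern, String outPattern) throws Exception {
        Date d = new SimpleDateFormat(inPattern).parse(value);
        return new SimpleDateFormat(outPattern).format(d);
    }
}
```

A mismatch between the declared pattern and the actual data raises a ParseException at runtime, which is why the pattern must match the incoming values exactly.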
7.1. Objectives
The Metadata folder in the Repository tree view stores reusable information on the files, databases, and/or systems that you need to build your Jobs. Various corresponding wizards help you store these pieces of information, which you can later use to set the connection parameters of the relevant input or output components; in Talend Open Studio for Data Integration you can also store the data descriptions, called schemas. The wizard procedures differ slightly depending on the type of connection chosen. Click Metadata in the Repository tree view to expand the folder tree. Each of the connection nodes gathers the various connections and schemas you have set up.
From Talend Open Studio for Data Integration, you can set up the following, amongst others: a DB connection, a JDBC schema, a SAS connection, a file schema, an LDAP schema, a Salesforce schema, a generic schema, an MDM connection, a WSDL schema,
Setting up a DB connection
an FTP connection.
The following sections explain in detail how to set up the different connections and schemas.
2. Click Next when completed. The second step requires you to fill in DB connection data.
Step 2: Connection
When you are creating the connection to certain databases, such as AS400, HSQLDB, Informix, MsSQL, MySQL, Oracle, Sybase, or Teradata, you can specify additional connection properties through the Additional parameters field in the Database Settings area.
2. Fill in the connection details and click Check to verify your connection. In order to be able to retrieve all table schemas in the database:
- enter dbo in the Schema field if you are connecting to MSSQL 2000,
- remove dbo from the Schema field if you are connecting to MSSQL 2005/2008.
3. Fill in, if need be, the database properties information. That is all for the first operation of the DB connection setup; click Finish to validate.
The newly created DB connection is now available in the Repository tree view, where it displays four folders, including Queries (for the SQL queries you save) and Table schemas, which gathers all schemas linked to this DB connection.
4. Right-click the newly created connection and select Retrieve schema from the drop-down list in order to load the desired table schema from the established connection. An error message displays if there are no tables to retrieve from the selected database or if you do not have the correct rights to access the database.
In the Select Filter Conditions area, you can filter the database objects using either of two options, Set the Name Filter or Use the Sql Filter, that is, by filtering on object names or by using SQL queries respectively.
To filter database objects using their names, do the following:
1. In the Select Filter Conditions area, select the Use the Name Filter option.
2. In the Select Types area, select the check box(es) of the database object(s) you want to filter or display. Available options can vary according to the selected database.
3. In the Set the Name Filter area, click Edit... to open the [Edit Filter Name] dialog box.
4. Enter the filter you want to use in the dialog box. For example, to retrieve the database objects whose names start with A, enter the filter A%; to retrieve all database objects whose names end with type, enter %type as your filter.
5. Click OK to close the dialog box.
6. Click Next to open a new view on the wizard that lists the filtered database objects.
To filter database objects using an SQL query, do the following:
1. In the Select Filter Conditions area, select the Use Sql Filter option.
2. In the Set the Sql Filter field, enter the SQL query you want to use to filter database objects.
3. Click Next to open a new view that lists the filtered database objects.
Once you have the filtered list of the database objects (tables, views, and synonyms), do the following to load the schemas of the desired objects into your repository:
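The name filters above use SQL LIKE-style wildcards, where % matches any run of characters. A minimal hand-written Java sketch of that matching rule (for illustration only; the Studio delegates the actual filtering to the database):

```java
import java.util.regex.Pattern;

public class NameFilterDemo {
    // Translate a LIKE-style filter (% = any run of characters, _ = any single
    // character) into a regular expression and test an object name against it.
    public static boolean matches(String name, String filter) {
        StringBuilder regex = new StringBuilder();
        for (char c : filter.toCharArray()) {
            if (c == '%') regex.append(".*");
            else if (c == '_') regex.append('.');
            else regex.append(Pattern.quote(String.valueOf(c)));
        }
        return name.matches(regex.toString());
    }
}
```

So the filter A% matches ACCOUNTS but not orders, and %type matches any name ending with "type".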
1. Select one or more database objects in the list and click Next to open a new view on the wizard where you can see the schemas of the selected objects. If no schema is visible in the list, click the Check connection button below the list to verify the database connection status.
2. Modify the schemas if needed, then click Finish to close the wizard. The schemas based on the selected tables are listed under the Table schemas folder corresponding to the database connection you created. In Java, make sure the data type in the Type column is correctly defined. For more information regarding data types, including the date pattern, see http://docs.oracle.com/javase/6/docs/api/index.html.
Click Finish to complete the DB schema creation. All the retrieved schemas are displayed in the Table schemas sub-folder under the relevant DB connection node.
Step 2: Connection
2. Fill in the connection details as follows:
- Fill in the JDBC URL used to access the DB server.
- In the Driver jar field, select the driver jar validating your connection to the database.
- In the Class name field, fill in the main class of the driver allowing communication with the database.
- Fill in your User name and Password.
- In the Mapping File field, select the mapping that allows the DB types to match the Java types of data in the schema. For example, the DB type VARCHAR corresponds to the Java type String. The mapping files are XML files that you can access via Window > Preferences > Talend > Metadata of TalendType.
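The mapping file's role can be pictured as a simple lookup from DB types to Java types. The sketch below is a hand-written illustration; the real mapping files are XML documents shipped with the Studio, and these three entries are only examples:

```java
import java.util.HashMap;
import java.util.Map;

public class TypeMappingDemo {
    // A tiny stand-in for the DB-type-to-Java-type mapping file.
    private static final Map<String, String> MAPPING = new HashMap<>();
    static {
        MAPPING.put("VARCHAR", "String");   // e.g. VARCHAR maps to java.lang.String
        MAPPING.put("INTEGER", "Integer");
        MAPPING.put("DATE", "Date");
    }

    public static String javaTypeFor(String dbType) {
        return MAPPING.getOrDefault(dbType.toUpperCase(), "Object"); // illustrative fallback
    }
}
```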
3. Click Check to verify your connection.
4. Fill in, if need be, the database properties information. That is all for the first operation of the DB connection setup; click Finish to validate.
The newly created DB connection is now available in the Repository tree view and it displays four folders including Queries (for the SQL queries you save) and Table schemas that will gather all schemas linked to this DB connection.
5. Right-click the newly created connection and select Retrieve schema from the drop-down list.
Click Finish to complete the DB schema creation. All the retrieved schemas are displayed in the Table schemas sub-folder under the relevant DB connection node.
7.4.1. Prerequisites
Before carrying out the procedure below to configure your SAS connection, make sure that you have retrieved your metadata from the SAS server and exported it in XML format.
2. 3.
Step 2: Connection
2. If needed, click the Check tab to verify that your connection is successful.
3. If needed, define the properties of the database in the corresponding fields in the Database Properties area.
4. Click Finish to validate your changes and close the wizard.
The newly set connection to the defined database displays under the DB Connections folder in the Repository tree view. This connection has four sub-folders, among which Table schemas groups all schemas relative to this connection.
5. Right-click the SAS connection you created and then select Retrieve Schema from SAS to display all schemas in the defined database under the Table schemas sub-folder.
Unlike the DB connection wizard, the [New Delimited File] wizard gathers both file connection and schema definitions in a four-step procedure.
Select the OS Format the file was created in. This information is used to prefill the fields of the subsequent step. If the list does not include the appropriate format, ignore it. The File viewer gives an instant picture of the loaded file. It allows you to check the file's consistency, the presence of a header, and, more generally, the file structure. Click Next to proceed to Step 3.
Set the Encoding, as well as Field and Row separators in the Delimited File Settings.
Depending on your file type (csv or delimited), you can also set the Escape and Enclosure characters to be used. If the file preview shows a header message, you can exclude the header from the parsing. Set the number of header rows to be skipped. Also, if you know that the file contains footer information, set the number of footer lines to be ignored.
The Limit of rows allows you to restrict the extent of the file being parsed. In the File Preview panel, you can view the impact of the new settings. Check the Set heading row as column names box to use the first parsed row as the labels for the schema columns. Note that the number of header rows to be skipped is then incremented by 1.
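The parsing settings above can be pictured with a small hand-written Java sketch. The separators, header count, and sample content are invented for the example, and the Studio's own parser also honors escape and enclosure characters, which this sketch omits:

```java
import java.util.ArrayList;
import java.util.List;

public class DelimitedParseDemo {
    // Split raw content on the row separator, skip the header rows,
    // stop after 'limit' rows, and split each row on the field separator.
    public static List<String[]> parse(String content, String fieldSep, String rowSep,
                                       int headerRows, int limit) {
        List<String[]> rows = new ArrayList<>();
        String[] lines = content.split(rowSep);
        for (int i = headerRows; i < lines.length && rows.size() < limit; i++) {
            rows.add(lines[i].split(fieldSep, -1));
        }
        return rows;
    }
}
```

Setting headerRows to 1 and limit to 2 on a four-line file, for instance, yields exactly two parsed data rows.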
Click Refresh on the preview panel for the settings to take effect and view the result on the viewer.
If the delimited file on which the schema is based has been changed, use the Guess button to generate the schema again. Note that if you have customized the schema, the Guess feature does not retain these changes. Click Finish. The new schema is displayed under the relevant File Delimited connection node in the Repository tree view. For further information about how to drop component metadata onto the workspace, see Section 4.2.2.2, How to drop components from the Metadata node.
Proceed in the same way as for the delimited file connection. Right-click Metadata in the Repository tree view and select Create file positional.
The file viewer shows a file preview and allows you to place your position markers.
Click the file preview and set the markers against the ruler. The orange arrow helps you refine the position. The Field length field lists a series of figures separated by commas; these are the numbers of characters between the markers. The asterisk symbol means all remaining characters on the row, starting from the preceding marker position. The Marker Position field shows the exact position of the marker on the ruler. You can change it to specify the position precisely. You can add as many markers as needed. To remove a marker, drag it towards the ruler. Click Next to continue.
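The Field length notation described above ("4,4,*" style) can be sketched in Java as follows. This is a hand-written illustration; the lengths and sample line are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class PositionalParseDemo {
    // Cut a fixed-width row according to a comma-separated length list;
    // '*' takes all remaining characters on the row.
    public static List<String> cut(String line, String lengths) {
        List<String> fields = new ArrayList<>();
        int pos = 0;
        for (String token : lengths.split(",")) {
            if (token.equals("*")) {
                fields.add(line.substring(pos));
                pos = line.length();
            } else {
                int len = Integer.parseInt(token);
                fields.add(line.substring(pos, pos + len));
                pos += len;
            }
        }
        return fields;
    }
}
```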
Proceed in the same way as for the delimited or positional file connection. Right-click Metadata in the Repository tree view and select Create file regex.
As for the delimited file schema creation, the format is requested in order to prefill the fields of the next step. If the OS format the file was created in is not offered in the list, ignore this field. The file viewer gives an instant picture of the loaded file. Click Next to define the schema structure.
Make sure to enclose the Regex code in single or double quotes as appropriate. Then click Refresh preview to take the changes into account. The button reads Wait until the preview is refreshed.
Click Next when setting is complete. The last step generates the Regex File schema.
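Regex file parsing of this kind relies on capturing groups: each group in the expression yields one field of the schema. A hand-written Java sketch (the log-like sample line and the expression are invented for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexFieldDemo {
    // Apply a regular expression to one row; each capturing group
    // becomes one field of the schema.
    public static String[] extract(String line, String regex) {
        Matcher m = Pattern.compile(regex).matcher(line);
        if (!m.find()) return new String[0];
        String[] fields = new String[m.groupCount()];
        for (int i = 1; i <= m.groupCount(); i++) fields[i - 1] = m.group(i);
        return fields;
    }
}
```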
This is done in the same way as for the connection of delimited or positional files. According to the option you select, the wizard helps you create either an input or an output schema. In a Job, the tFileInputXML component uses the input schema created to read XML files, whereas tAdvancedFileOutputXML uses the output schema created to either write an XML file, or to update an existing XML file. Step 1, in which you enter the general properties of the schema to be created, precedes the step at which you set the type of schema as either input or output. It is therefore advisable to enter names which will help you to distinguish between your input and output schemas. For further information about reading an XML file, see Section 7.8.1, Setting up an XML schema for an input file. For further information about writing an XML file, see Section 7.8.2, Setting up an XML schema for an output file.
3. Enter the generic schema information, such as its Name and Description.
4.
2.
2. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
3. In the Limit field, enter the number of columns on which the XPath query is to be executed, or 0 if you want to run it against all of the columns.
4. Click Next to define the schema parameters.
1. Click Browse... and browse your directory to the XSD file to be uploaded. Alternatively, enter the access path to the file. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.
2. The Schema Viewer area displays a preview of the XML structure. You can expand and visualize every level of the file's XML tree structure.
3. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
4. In the Limit field, enter the number of columns on which the XPath query is to be executed, or 0 if you want to run it against all of the columns.
5. Click Next to define the schema parameters.
The schema definition window is composed of four views:

Source Schema: Tree view of the XML file.
Target Schema: Extraction and iteration information.
Preview: Preview of the target schema, together with the input data of the selected columns displayed in the defined order. The preview functionality is not available if you loaded an XSD file.
File Viewer: Preview of the raw data.
First, define an XPath loop and the maximum number of times the loop can run. To do so:
1. Populate the XPath loop expression field with the absolute XPath expression of the node to be iterated upon. There are two ways to do this: either type in the full expression (press Ctrl+Space to use the autocompletion list), or drop a node from the tree view under Source Schema onto the XPath loop expression field. An orange arrow links the node to the corresponding expression.
2. In the Loop limit field, specify the maximum number of times the selected node can be iterated.
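The XPath loop and its limit can be pictured with the standard Java XPath API. This is a hand-written sketch; the XML sample and the /root/customer expression are invented for illustration:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathLoopDemo {
    // Evaluate the absolute XPath loop expression, then iterate at most
    // 'loopLimit' times over the matching nodes.
    public static List<String> loop(String xml, String xpathLoop, int loopLimit) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(xpathLoop, doc, XPathConstants.NODESET);
        List<String> out = new ArrayList<>();
        for (int i = 0; i < nodes.getLength() && i < loopLimit; i++) {
            out.add(nodes.item(i).getTextContent());
        }
        return out;
    }
}
```

With a loop limit of 2 and three matching nodes, only the first two iterations are performed, which is the effect of the Loop limit field.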
Next, define the fields to be extracted. To do so:
1. Drop the node(s) of interest from the Source Schema tree onto the Relative or absolute XPath expression column. You can select several nodes to drop onto the table by pressing Ctrl or Shift and clicking the nodes of interest. The arrows linking the individually selected nodes on the Source Schema to the Fields to extract table are blue; the others are gray.
2. If needed, you can add as many columns to be extracted as necessary, delete columns, or change the column order using the toolbar buttons below the table.
3. In the Column name fields, enter labels for the columns to be displayed in the schema Preview area.
4. Click Refresh Preview to display a preview of the target schema. The fields are displayed in the schema according to the defined order. The preview functionality is not available if you loaded an XSD file.
5.
Use the toolbar buttons to add or delete selected columns and to modify the order of the columns.
The new schema, along with its columns, appears under the File XML node in the Repository tree view.
2. Right-click File XML and select Create file XML from the pop-up menu.
3. Enter the generic schema information, such as its Name and Description.
4.
2.
3. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
4. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if you want it to be run against all of the columns.
5. In the Output File Path zone, browse to or enter the path to the output file. If the file does not yet exist, it will be created during the execution of a Job using a tAdvancedFileOutputXML component. If the file already exists, it will be overwritten.
6. Click Next to define the schema.
To create the output XML schema from an XSD file, do the following:
1. Select the Create from a file option.
2. Click the Browse... button next to the XML or XSD File field, browse to the access path of the XSD file whose structure is to be applied to the output file, and double-click the file.
3. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.
The File Viewer area displays a preview of the XML structure, and the File Content area displays a maximum of the first 50 rows of the file.
4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if you want it to be run against all of the columns.
6. In the Output File Path zone, browse to or enter the path to the output file. If the file does not yet exist, it will be created during the execution of a Job using a tAdvancedFileOutputXML component. If the file already exists, it will be overwritten.
7. Click Next to define the schema.
Create an attribute for an element: In the Linker Target area, either:
- Right-click the element of interest and select Add Attribute from the contextual menu, enter a name for the attribute in the dialog box that appears, and click OK, or
- Select the element of interest, click the button at the bottom, select Create as attribute in the dialog box that appears, and click OK. Then, enter a name for the attribute in the next dialog box and click OK.
Create a name space for an element: In the Linker Target area, either:
- Right-click the element of interest and select Add Name Space from the contextual menu, enter a name for the name space in the dialog box that appears, and click OK, or
- Select the element of interest, click the button at the bottom, select Create as name space in the dialog box that appears, and click OK. Then, enter a name for the name space in the next dialog box and click OK.
Delete one or more elements/attributes/name spaces: In the Linker Target area, either:
- Right-click the element(s)/attribute(s)/name space(s) of interest and select Delete from the contextual menu, or
- Select the element(s)/attribute(s)/name space(s) of interest and click the button at the bottom, or
- Select the element(s)/attribute(s)/name space(s) of interest and press the Delete key.
Deleting an element will also delete its children, if any.
Adjust the order of one or more elements: In the Linker Target area, select the element(s) of interest and click the up or down buttons.
Set a static value for an element/attribute/name space: In the Linker Target area, right-click the element/attribute/name space of interest and select Set A Fix Value from the contextual menu. The value you set will replace any value retrieved for the corresponding column from the incoming data flow in your Job. You can set a static value only for a child element of the loop element, on the condition that the element does not have its own children and does not have a source-target mapping on it.
Create a source-target mapping: Select the column of interest in the Linker Source area, drop it onto the node of interest in the Linker Target area, and select Create as sub-element of target node, Create as attribute of target node, or Add linker to target node according to your need in the dialog box that appears, and click OK. If you choose an option that is not permitted for the target node, you will see a warning message and your operation will fail.
Remove a source-target mapping: In the Linker Target area, right-click the node of interest and select Disconnect Linker from the contextual menu.
Create an XML tree from Right-click any schema item in the Linker Target area and select Import XML another XML or XSD file Tree from the contextual menu to load another XML or XSD file. Then, you need to create source-target mappings manually and define the output schema all again. You can select and drop several fields at a time, using the Ctrl + Shift technique to make multiple selections, therefore making mapping faster. You can also make multiple selections for right-click operations. In this example, we base the output schema on the loaded customer.xml, define a loop to run on the customer element, and define the id node as a child element, rather than an attribute as in the loaded file. To do so: 1. In the Linker Target area, right-click the customer element and select Set As Loop Element from the contextual menu.
2. Right-click the id node and select Delete from the contextual menu.
3. Select the id column in the Linker Source area, and drop it onto the customer element in the Linker Target area. The [Selection] dialog box appears, asking you to define the source-target relationship.
4. Select Create as sub-element of target node and click OK to validate the choice. A blue arrow line links the columns mapped.
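To illustrate the difference between the two mapping choices (the customer element and id node come from the example; the value 1 is hypothetical), compare the XML shapes each one produces:

```xml
<!-- id created as an attribute of the customer element (as in the loaded file) -->
<customer id="1"> ... </customer>

<!-- id created as a sub-element of the customer element (the choice made here) -->
<customer>
  <id>1</id>
  ...
</customer>
```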
5. You can change the metadata in the Name field (metadata, by default), add a Comment in the corresponding field, and make further modifications using the toolbar, for example adding or deleting a column using the corresponding toolbar buttons.
Click Finish to finalize creation of the XML output file. The new schema is displayed under the corresponding File XML node in the Repository tree view.
Then proceed the same way as for the file delimited connection.
The File viewer and sheets setting area shows a preview of the file. In the Set sheets parameters list, select the check box next to the sheet you want to upload. By default, the file preview displays Sheet1. Select another sheet from the drop-down list if need be, and check that the file is properly read in the preview table. Click Next to continue.
As with the File Delimited schema procedure, you can precisely set the separator and the rows to be skipped as header or footer.
You can fill in the First column and Last column fields to set precisely the columns to be read in the file. You may need to skip column A, for example, if it does not contain proper data to be processed.
Also select the Set heading row as column names check box to take the heading names into account. Make sure you click Refresh to view the change in the preview table. Then click Next to continue.
Click Finish. The new schema is displayed under the relevant File Excel connection node in the Repository tree view.
Proceed the same way as for other file connections. Right-click Metadata in the Repository tree view and select Create file Ldif. Make sure that you installed the relevant module as described in the Installation guide. For more information, check out http://talendforge.org/wiki/doku.php.
The connection functionality to a remote server is not operational yet for the LDIF file collection. The File viewer provides a preview of the file's first 50 rows.
Click Refresh Preview to include the selected attributes into the file preview. DN is omitted in the list of attributes as this key attribute is automatically included in the file preview, hence in the generated schema.
Unlike the DB connection wizard, the LDAP wizard gathers both file connection and schema definition in a four-step procedure.
Then click Check Network Parameter to verify the connection and activate the Next button.
The connection parameters are as follows:
Host: the LDAP server IP address.
Port: the listening port of the LDAP directory.
Encryption method: LDAP: no encryption is used. LDAPS: secured LDAP. TLS: a certificate is used.
Click Check authentication to verify your access rights.
Authentication method: Simple authentication requires the Authentication Parameters field to be filled in; Anonymous authentication does not require any authentication parameters.
Authentication Parameters: Bind DN or User: the login expected by the LDAP authentication method. Bind password: the expected password. Save password: remembers the login details.
Get Base DN from Root DSE / Base DN: the path to the authorized user tree leaf. The Fetch Base DNs button retrieves the DN automatically from the Root DSE.
Alias Dereferencing: Never improves search performance if you are sure that no aliases need to be dereferenced. By default, Always is used. Always: always dereferences aliases.
Never: never dereferences aliases. Searching: dereferences aliases only after name resolution. Finding: dereferences aliases only during name resolution.
Referral Handling: redirection of user requests. Ignore: does not handle request redirections. Follow: does handle request redirections.
Limit
Click Fetch Base DNs to retrieve the DN and click the Next button to continue.
Click Refresh Preview to display the selected column and a sample of the data. Then click Next to continue.
If the LDAP directory which the schema is based on has changed, use the Guess button to regenerate the schema. Note that if you have customized the schema, your changes will not be retained after the Guess operation. Click Finish. The new schema is displayed under the relevant LDAP connection node in the Repository tree view.
Then proceed the same way as for any other metadata connection.
1. Enter your User name and Password in the corresponding fields to connect to your Salesforce account through the Salesforce Web service. Click Check login to verify that you can connect without issue. Click Finish to close the wizard.
2. In the Repository tree view, expand the Connection node, right-click the connection you set up in Step 2, and select Retrieve Salesforce Modules in the pop-up menu.
3. In the Select Schema to create area, you can narrow down the selection by filtering the schemas displayed. Type the name of the schema you want to filter on in the Name Filter field. To retrieve more modules, select the check boxes for the respective schemas.
Click Check Connection to verify the creation status and click Finish to save the modules you retrieved.
Right-click the module you retrieved in Step 3, and select Retrieve Salesforce Schemas in the pop-up menu.
In the Browse data column and set query condition area, you can narrow down the selection by filtering displayed data. To do that, type in the column you want to filter on and the value you want to drill down to in the Query Condition field. The columns in the Column list are listed in alphabetical order. Clear the order the fields check box to list them randomly.
Click Refresh Preview, if you entered a query condition, so that the preview gets updated accordingly. By default, the preview shows all columns of the selected module. Then click Next to continue.
You can add a customized name (by default, metadata) and edit the schema using the tool bar.
You can also retrieve or update the Salesforce schema by clicking Guess. Note, however, that any changes or customization of the schema might be lost when using this feature. Click Finish. The new schema is displayed in the Repository tree view under the relevant Salesforce connection node. You can drop the metadata defined from the Repository onto the design workspace. A dialog box opens in which you can choose to use either a tSalesforceInput or tSalesforceOutput component in your Job.
For more information about dropping component metadata in the design workspace, see Section 4.2.2.2, How to drop components from the Metadata node.
3. Fill in the connection properties such as Name, Purpose and Description. The Status field is a customized field that can be defined. For more information, see Section 2.6.8, Status settings.
4. Fill in the connection details, including the authentication information for the MDM server.
5. Click Check to check the connection you have created. A dialog box displays to confirm whether your connection is successful.
6. Click OK to close the confirmation dialog box and then Next to proceed to the next step.
7. From the Version list, select the master data version on the MDM server to which you want to connect.
8. From the Data-Model list, select the data model against which the master data is validated.
9. From the Data-Container list, select the data container that holds the master data you want to access.
10. Click Finish to validate your changes and close the dialog box. The newly created connection is listed under Talend MDM under the Metadata folder in the Repository tree view.
You now need to retrieve the XML schema of the business entities linked to this MDM connection.
2.
3. Select the Input MDM option in order to download an input XML schema and then click Next to proceed to the following step.
4. From the Entities field, select the business entity (XML schema) from which you want to retrieve values. The name displays automatically in the Name field. You are free to enter any text in this field, although you would likely put the name of the entity from which you are retrieving the schema.
5.
The schema of the entity you selected automatically displays in the Source Schema panel. Here, you can set the parameters to be taken into account for the XML schema definition. The schema dialog box is divided into four panels:
- a tree view of the uploaded entity,
- extraction and iteration information,
- a target schema preview,
- a raw data viewer.
6. In the Xpath loop expression area, enter the absolute XPath expression leading to the XML structure node on which to apply the iteration, or drop the node from the source schema to the target schema Xpath field. This link is orange in color.
The Xpath loop expression field is compulsory.
7. If required, define a Loop limit to restrict the iteration to a number of nodes.
In the capture above, we use Features as the element to loop on because it is repeated within the Product entity as follows:

<Product>
  <Id>1</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color red</Feature>
    <Feature>Size maxi</Feature>
  </Features>
  ...
</Product>
<Product>
  <Id>2</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color blue</Feature>
    <Feature>Thermos</Feature>
  </Features>
  ...
</Product>

By doing so, the tMDMInput component that uses this MDM connection will create a new row for every feature of an item.
8. To define the fields to extract, drop the relevant node from the source schema to the Relative or absolute XPath expression field.
Use the [+] button to add rows to the table and select as many fields to extract as necessary. Press the Ctrl or Shift key to make multiple selections of grouped or separate nodes and drop them to the table.
9. If required, enter a name for each of the retrieved columns in the Column name field. You can prioritize the order of the fields to extract by selecting a field and using the up and down arrows. The link of the selected field is blue; all other links are grey.
10. Click Finish to validate your modifications and close the dialog box. The newly created schema is listed under the corresponding MDM connection in the Repository tree view.
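Outside the Studio, the loop/extract mechanism can be sketched in plain Java with the standard javax.xml.xpath API. The XML and XPath expressions mirror the Product example above; this is an illustration of the principle, not the code Talend generates:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathLoopDemo {

    // Two Product items, as in the entity shown above (wrapped in a root element).
    static final String XML =
        "<Products>"
      + "<Product><Id>1</Id><Name>Cup</Name><Features>"
      + "<Feature>Color red</Feature><Feature>Size maxi</Feature></Features></Product>"
      + "<Product><Id>2</Id><Name>Cup</Name><Features>"
      + "<Feature>Color blue</Feature><Feature>Thermos</Feature></Features></Product>"
      + "</Products>";

    // One output row per node matched by the loop expression.
    static List<String[]> extractRows() {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new InputSource(new StringReader(XML)));
            XPath xp = XPathFactory.newInstance().newXPath();
            // Xpath loop expression: iterate on every repeated Feature node.
            NodeList loop = (NodeList) xp.evaluate(
                    "/Products/Product/Features/Feature", doc, XPathConstants.NODESET);
            List<String[]> rows = new ArrayList<>();
            for (int i = 0; i < loop.getLength(); i++) {
                Node feature = loop.item(i);
                // Relative XPath from the loop node: climb back up to the product Id.
                String id = xp.evaluate("../../Id", feature);
                rows.add(new String[] { id, feature.getTextContent() });
            }
            return rows;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        for (String[] row : extractRows()) {
            System.out.println(row[0] + " | " + row[1]); // e.g. "1 | Color red"
        }
    }
}
```

With four Feature nodes across the two products, the loop yields four rows, which matches the "new row for every feature" behavior described above.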
To modify the created schema, complete the following:
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu. A dialog box displays.
3. Modify the schema as needed. You can change the name of the schema according to your needs; you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
4. Click Finish to close the dialog box. The MDM input connection (tMDMInput) is now ready to be dropped into any of your Jobs.
2.
3. Select the Output MDM option in order to define an output XML schema and then click Next to proceed to the following step.
4. From the Entities field, select the business entity (XML schema) in which you want to write values. The name displays automatically in the Name field. You are free to enter any text in this field, although you would likely put the name of the entity from which you are retrieving the schema.
5.
An identical schema of the entity you selected is automatically created in the Linker Target panel, and columns are automatically mapped from the source to the target panel. The wizard automatically defines the item Id as the looping element, but you can always select to loop on another element. Here, you can set the parameters to be taken into account for the XML schema definition.
6. Click Schema Management to display a dialog box.
7. Make the necessary modifications to define the XML schema you want to write in the selected entity.
Your Linker Source schema must correspond to the Linker Target schema, i.e. define the elements in which you want to write values.
8. Click OK to close the dialog box. The defined schema displays under Schema list.
9. In the Linker Target panel, right-click the element you want to define as a loop element and select Set as loop element. This restricts the iteration to one or more nodes. By doing so, the tMDMOutput component that uses this MDM connection will create a new row for every feature of an item. You can prioritize the order of the fields to write by selecting a field and using the up and down arrows.
10. Click Finish to validate your modifications and close the dialog box. The newly created schema is listed under the corresponding MDM connection in the Repository tree view.
To modify the created schema, complete the following:
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu. A dialog box displays.
3. Modify the schema as needed. You can change the name of the schema according to your needs; you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
4. Click Finish to close the dialog box. The MDM output connection (tMDMOutput) is now ready to be dropped into any of your Jobs.
2.
3. Select the Receive MDM option in order to define a receive XML schema and then click Next to proceed to the following step.
4. From the Entities field, select the business entity (XML schema) according to which you want to receive the XML schema. The name displays automatically in the Name field. You can enter any text in this field, although you would likely put the name of the entity according to which you want to receive the XML schema.
5.
The schema of the entity you selected displays in the Source Schema panel. Here, you can set the parameters to be taken into account for the XML schema definition. The schema dialog box is divided into four panels:
- a tree view of the uploaded entity,
- extraction and iteration information,
- a target schema preview,
- a raw data viewer.
6. In the Xpath loop expression area, enter the absolute XPath expression leading to the XML structure node on which to apply the iteration, or drop the node from the source schema to the target schema Xpath field. This link is orange in color.
The Xpath loop expression field is compulsory.
7. If required, define a Loop limit to restrict the iteration to one or more nodes.
In the above capture, we use Features as the element to loop on because it is repeated within the Product entity as follows:

<Product>
  <Id>1</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color red</Feature>
    <Feature>Size maxi</Feature>
  </Features>
  ...
</Product>
<Product>
  <Id>2</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color blue</Feature>
    <Feature>Thermos</Feature>
  </Features>
  ...
</Product>

By doing so, the tMDMReceive component that uses this MDM connection will create a new row for every feature of an item.
8. To define the fields to receive, drop the relevant node from the source schema to the Relative or absolute XPath expression field.
Use the [+] button to add rows to the table and select as many fields to extract as necessary. Press the Ctrl or Shift key to make multiple selections of grouped or separate nodes and drop them to the table.
9. If required, enter a name for each of the received columns in the Column name field. You can prioritize the order of the fields you want to receive by selecting a field and using the up and down arrows. The link of the selected field is blue; all other links are grey.
10. Click Finish to validate your modifications and close the dialog box. The newly created schema is listed under the corresponding MDM connection in the Repository tree view.
To modify the created schema, complete the following:
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu. A dialog box displays.
3. Modify the schema as needed. You can change the name of the schema according to your needs; you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
4. Click Finish to close the dialog box. The MDM receive connection (tMDMReceive) is now ready to be dropped into any of your Jobs.
3. Enter the generic schema information, such as its Name and Description.
4.
In the Web Service Parameter zone:
1. In the Web Service field, enter the URI which will transmit the desired values.
2. If necessary, select the Need authentication? check box and then enter your authentication information in the User and Password fields.
3. If you use an HTTP proxy, select the Use http proxy check box and enter the information required in the host, Port, user and password fields.
4. Enter the Method name in the corresponding field.
5. In the Value table, Add or Remove values as desired, using the corresponding buttons.
6. Click Refresh Preview to check that the parameters have been entered correctly.

In the Preview tab, the values to be transmitted by the Web Service method are displayed, based on the parameters entered.
1. 2.
The new schema is added to the Repository under the Web Service node.
2.
Right-click FTP and select Create FTP from the context menu. The connection wizard opens:
3. Enter the generic schema information, such as its Name and Description. The Status field is a customized field which can be defined in the Preferences dialog box (Window > Preferences). For further information about setting preferences, see Section 2.5, Setting Talend Open Studio for Data Integration preferences.
4. When you have finished, click Next to enter the FTP server connection information.
Step 2: Connection
2. In the Host field, enter the name of your FTP server host.
3. Enter the Port number in the corresponding field.
4. Select the Encoding type from the list.
5. From the Connection Model list, select the connection model you want to use:
Select Passive if you want the FTP server to choose the port connection to be used for data transfer.
Select Active if you want to choose the port yourself.
6. In the Parameter area, select a setting for FTP server usage. For standard usage, there is no need to select an option.
Select the SFTP Support check box to use the SSH security protocol to protect server communications. An Authentication method appears; select Public key or Password according to what you use.
Select the FTPs Support check box to protect server communication with the SSL security protocol.
Select the Use Socks Proxy check box if you want to use this option, then enter the proxy information (the host name, port number, user name and password).
7.
All of the connections created appear under the FTP server connection node, in the Repository view. You can drop the connection metadata from the Repository onto the design workspace. A dialog box opens in which you can choose the component to be used in your Job. For further information about how to drop metadata onto the workspace, see Section 4.2.2.2, How to drop components from the Metadata node.
6.
Each class or category in the system folder contains several routines or functions. Double-click the class that you want to open. All of the routines or functions within a class are composed of some descriptive text, followed by the corresponding Java code. In the Routines view, you can use the scrollbar to browse the different routines. Alternatively:
1. Press Ctrl+O in the Routines view. A dialog box displays a list of the different routines in the category.
2. Click the routine of interest. The view jumps to the section comprising the routine's descriptive text and corresponding code.
The syntax of routine call statements is case sensitive.
For more information about the most common Java routines, see Appendix C, System routines.
2.
3. 4.
5. In the workspace, select all or part of the code and copy it using Ctrl+C.
6. Click the tab to access your user routine and paste the code by pressing Ctrl+V.
7. Modify the code as required and press Ctrl+S to save it.
We advise you to use the descriptive text (in blue) to detail the input and output parameters. This will make your routines easier to maintain and reuse.
2. Right-click Routines and select Create routine. The [New routine] dialog box opens.
3. Enter the information required to create the routine, i.e., its name, description, and so on.
4. Click Finish to proceed to the next step.
The newly created routine appears in the Repository tree view, directly below the Routines node. The routine editor opens to reveal a model routine which contains a simple example, by default, comprising descriptive text in blue, followed by the corresponding code. We advise you to add a very detailed description of the routine. The description should generally include the input and output parameters you would expect to use in the routine, as well as the results returned along with an example. This information tends to be useful for collaborative work and the maintenance of the routines. The following example of code is provided by default:
public static void helloExample(String message) {
    if (message == null) {
        message = "World"; //$NON-NLS-1$
    }
    System.out.println("Hello " + message + " !");
}

5. Modify or replace the model with your own code and press Ctrl+S to save the routine. Otherwise, the routine is saved automatically when you close it.
You can copy all or part of a system routine or class and use it in a user routine by using the Ctrl+C and Ctrl+V commands, then adapt the code according to your needs. For further information about how to customize routines, see Section 8.3, Customizing the system routines.
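As a quick illustration of how the model routine behaves once called (in a Job you would invoke it from a component's Basic settings field, for example as Routines.helloExample(row1.name)), the code can be exercised directly. The class wrapper and main method below are only scaffolding for running it outside the Studio:

```java
public class RoutineDemo {

    // The model routine provided by default (reproduced from the example above).
    public static void helloExample(String message) {
        if (message == null) {
            message = "World"; //$NON-NLS-1$
        }
        System.out.println("Hello " + message + " !");
    }

    public static void main(String[] args) {
        helloExample(null);      // prints: Hello World !
        helloExample("Talend");  // prints: Hello Talend !
    }
}
```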
If you want to reuse a system routine for your own specific needs, see Section 8.3, Customizing the system routines.
3. Click New to open a new dialog box where you can import the external library. You can delete any of the already imported routine files by selecting the file in the Library File list and clicking the Remove button.
4. Enter the name of the library file in the Input a library's name field, followed by the file format (.jar), or select the Browse a library file option and click Browse to set the file path in the corresponding field.
5. If required, enter a description in the Description field.
6. Click OK to confirm your changes. The imported library file is listed in the Library File list in the [Import External Library] dialog box.
7. Click Finish to close the dialog box. The library file is imported into the library folder of your current Studio and also listed in the Modules view of the same Studio. For more information about the Modules view, see Section 4.5.4, How to install external modules.
Alternatively, you can call any of these routines by indicating the relevant class name and the name of the routine, followed by the expected settings, in any of the Basic settings fields in the following way: <ClassName>.<RoutineName>
1. In the Palette, click File > Management, then drop a tFileTouch component onto the workspace. This component allows you to create an empty file. Double-click the component to open its Basic settings view in the Component tab.
2.
3. In the FileName field, enter the path to access your file, or click [...] and browse the directory to locate the file.
4. Close the double inverted commas around your file extension as follows: "D:/Input/customer".txt.
5. Add the plus symbol (+) between the closing inverted commas and the file extension.
6. Press Ctrl+Space to open a list of all of the routines, and in the auto-completion list which appears, select TalendDate.getDate to use the Talend routine which allows you to obtain the current date.
7. Modify the format of the date provided by default, if required.
8. Enter the plus symbol (+) next to the getDate variable to complete the routine call, and place double inverted commas around the file extension.
If you are working on Windows, the ":" between the hours and minutes and between the minutes and seconds must be removed.
9. Press F6 to run the Job. The tFileTouch component creates an empty file with the day's date, retrieved upon execution of the getDate routine called.
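The resulting expression concatenates string literals with the routine call. A minimal sketch of the same idea in plain Java follows; here getDate is a simplified stand-in for Talend's TalendDate.getDate routine (which is only available inside the Studio), and the pattern uses java.text.SimpleDateFormat letters with the colons removed from the time part so the name is valid on Windows:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class FileNameDemo {

    // Simplified stand-in for TalendDate.getDate (illustrative only):
    // formats the current date with the given SimpleDateFormat pattern.
    static String getDate(String pattern) {
        return new SimpleDateFormat(pattern).format(new Date());
    }

    public static void main(String[] args) {
        // Same concatenation as in the FileName field of tFileTouch.
        String fileName = "D:/Input/customer" + getDate("yyyy-MM-dd HHmmss") + ".txt";
        System.out.println(fileName);
    }
}
```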
What is ELT
The sections below show you how to manage these two types of SQL templates.
Aggregate: realizes aggregation (sum, average, count, etc.) over a set of data.
Components: tSQLTemplateAggregate.

Commit: sends a Commit instruction to the RDBMS.
Components: tSQLTemplate, tSQLTemplateAggregate, tSQLTemplateCommit, tSQLTemplateFilterColumns, tSQLTemplateFilterRows, tSQLTemplateMerge, tSQLTemplateRollback.
Parameters: Null.

Rollback: sends a Rollback instruction to the RDBMS.
Components: tSQLTemplate, tSQLTemplateAggregate, tSQLTemplateCommit, tSQLTemplateFilterColumns, tSQLTemplateFilterRows, tSQLTemplateMerge, tSQLTemplateRollback.
Parameters: Null.

DropSourceTable: removes a source table.
Components: tSQLTemplate, tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows.
Parameters: Table name (when used with tSQLTemplate), Source table name.

DropTargetTable: removes a target table.
Components: tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows.
Parameters: Target table name.

FilterColumns: selects and extracts a set of data from given columns in the RDBMS.
Components: tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows.
Parameters: Target table name (and schema), Source table name (and schema).

FilterRow: selects and extracts a set of data from given rows in the RDBMS.
Components: tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows.
Parameters: Target table name (and schema), Source table name (and schema), Conditions.

MergeInsert: inserts records from the source table into the target table.
Components: tSQLTemplateMerge, tSQLTemplateCommit.
Parameters: Target table name (and schema), Source table name (and schema), Conditions.

MergeUpdate: updates the target table with records from the source table.
Components: tSQLTemplateMerge, tSQLTemplateCommit.
Parameters: Target table name (and schema), Source table name (and schema), Conditions.
Each folder contains system sub-folders containing pre-defined SQL statements, as well as a UserDefined folder in which you can store SQL statements that you have created or customized. Each system folder contains several types of SQL templates, each designed to accomplish a dedicated task. Apart from the Generic folder, the SQL templates are grouped into different folders according to the type of database for which they are to be used. The templates in the Generic folder are standard, for use in any database. You can use these as a basis from which you can develop more specific SQL templates than those defined in Talend Open Studio for Data Integration.
The system folders and their content are read only.
From the Repository tree view, proceed as follows to open an SQL template:
1. In the Repository tree view, expand SQL Templates and browse to the template you want to open.
2. Double-click the class that you want to open, for example, aggregate in the Generic folder. The aggregate template view displays in the workspace.
You can read the predefined aggregate statements in the template view. The parameters, such as TABLE_NAME_TARGET and operation, are to be defined when you design related Jobs. The parameters can then be easily set in the associated components, as mentioned in the previous section. Every time you click or open an SQL template, its corresponding property view displays at the bottom of the Studio. Click the aggregate template, for example, to view its properties as presented below:
For further information regarding the different types of SQL templates, see Section 9.3.1, Types of system SQL templates. For further information about how to use the SQL templates with the associated components, see Section 4.4.3, How to use the SQL Templates.
2. Right-click UserDefined and select Create SQL Template to open the [SQL Templates] wizard.
3. Enter the information required to create the template and click Finish to close the wizard.
The name of the newly created template appears under UserDefined in the Repository tree view. Also, an SQL template editor opens on the design workspace, where you can enter the code for the newly created template. For further information about how to create a user-defined SQL template and how to use it in a Job, see tMysqlTableList in Talend Open Studio Components Reference Guide.
2.
3.
4.
5. In the design workspace, select tMysqlConnection and click the Component tab to define the component's basic settings.
6. In the Basic settings view, set the database connection details manually.
7. In the design workspace, select tSQLTemplateCommit and click the Component tab to define its basic settings.
8. On the Database type list, select the relevant database type, and from the Component list, select the relevant database connection component if more than one connection is used.
Procedure 9.2. Grouping data, writing aggregated data and dropping the source table
1. In the design workspace, select tSQLTemplateAggregate and click the Component tab to define its basic settings.
2. On the Database type list, select the relevant database type, and from the Component list, select the relevant database connection component if more than one connection is used.
3. Enter the names for the database, source table, and target table in the corresponding fields and click the [...] button next to Edit schema to define the data structure in the source and target tables.
304
The source table schema consists of three columns: First_Name, Last_Name and Country. The target table schema consists of two columns: country and total. In this example, we want to group citizens by their nationalities and count citizen number in each country. To do that, we define the Operations and Group by parameters accordingly.
4. In the Operations table, click the plus button to add one or more lines, then click in the Output column line to select the output column that will hold the counted data.
5. Click in the Function line and select the operation to be carried out.
6. In the Group by table, click the plus button to add one or more lines, then click in the Output column line to select the output column that will hold the aggregated data.
7. Click the SQL template tab to open the corresponding view.
8. Click the plus button twice under the SQL template list table to add two SQL templates.
9. Click the first SQL template row and select the MySQLAggregate template from the drop-down list. This template generates code to aggregate data according to the configuration in the Basic settings view.
10. Do the same to select the MySQLDropSourceTable template for the second SQL template row. This template generates code to delete the source table from which the data to be aggregated comes.

To add new SQL templates to an ELT component for execution, you can simply drop the templates of your choice either onto the component in the design workspace, or onto the component's SQL template list table.

The templates set up in the SQL template list table have priority over the parameters set in the Basic settings view and are executed in top-down order. So in this use case, if you place MySQLDropSourceTable first in the list, the source table will be deleted prior to aggregation, meaning that nothing will be aggregated.
Procedure 9.3. Reading the target database and listing the Job execution result
1. In the design workspace, select tMysqlInput, and click the Component tab to define its basic settings.
2. Select the Use an existing connection check box to use the database connection that you have defined in the tMysqlConnection component.
3. To define the schema, select Repository and then click the three-dot button to choose the database table whose schema is to be used. In this example, the target table holding the aggregated data is selected.
4. In the Table Name field, type in the name of the table you want to query. In this example, the table is the one holding the aggregated data.
5. In the Query area, enter the query statement to select the columns to be displayed.
6. Save your Job and press F6 to execute it. The source table is deleted.
A two-column table citizencount is created in the database. It groups citizens according to their nationalities and gives their total count in each country.
Appendix A. GUI
This appendix describes the Graphical User Interfaces (GUI) of Talend Open Studio for Data Integration.
Main window
The various panels and their respective features are detailed hereafter.
All the panels, tabs, and views described in this documentation are specific to Talend Open Studio for Data Integration. Some views listed in the [Show View] dialog box are Eclipse-specific and are not covered in this documentation. For information on such views, check the Eclipse online documentation at http://www.eclipse.org/documentation/.
Edit project properties: Opens a dialog box where you can customize the settings of the current project. For more information, see Section 2.6, Customizing project settings.
Import: Opens a wizard that helps you to import different types of resources (files, items, preferences, XML catalogs, etc.) from different sources.
Export: Opens a wizard that helps you to export different types of resources (files, items, preferences, breakpoints, XML catalogs, etc.) to different destinations.
Close: Closes the Studio main window.
Open: Opens a file from within the Studio.
Undo: Undoes the last action done in the Studio design workspace.
Redo: Redoes the last action done in the Studio design workspace.
Cut: Cuts the selected object in the Studio design workspace.
Copy: Copies the selected object in the Studio design workspace.
Paste: Pastes the previously copied object in the Studio design workspace.
Delete: Deletes the selected object in the Studio design workspace.
Select All: Selects all components present in the Studio design workspace.

View:
Zoom In: Obtains a larger image of the open Job.
Zoom Out: Obtains a smaller image of the open Job.
Show Grid: Displays a grid in the design workspace; all items in the open Job are snapped to it.
Snap to Geometry: Enables the Snap to Geometry feature.

Window:
Perspective: Opens the different perspectives corresponding to the different items in the list.
Show View...: Opens the [Show View] dialog box which enables you to display different views in the Studio.
Maximize Active View or Editor: Maximizes the current perspective.
Preferences: Opens the [Preferences] dialog box which enables you to set your preferences. For more information about preferences, see Section 2.5, Setting Talend Open Studio for Data Integration preferences.

Help:
Welcome: Opens a welcome page with links to the Talend Open Studio for Data Integration documentation and Talend practical sites.
Help Contents: Opens the Eclipse help system documentation.
About Talend Open Studio for Data Integration: Displays:
- the software version you are using,
- detailed information on your software configuration that may be useful if there is a problem,
- detailed information about plug-in(s),
- detailed information about Talend Open Studio for Data Integration features.
Export logs: Opens a wizard that helps you to export all logs generated in the Studio, together with system configuration information, to an archive file.
Software Updates:
- Find and Install...: Opens the [Install/Update] wizard that helps you search for updates to the currently installed features, or for new features to install.
- Manage Configuration...: Opens the [Product Configuration] dialog box where you can manage the Talend Open Studio for Data Integration configuration.
Import items
Project settings
The refresh button allows you to update the tree view with the latest changes made. The filter button allows you to open the filter settings view so as to configure the display of the Repository tree view.
The Repository tree view stores all your data (Business, Jobs) and metadata (Routines, DB/File connections, any meaningful Documentation and so on). The list below describes the nodes in the Repository tree view.

Business Models: Under the Business Models folder are grouped all business models of the project. Double-click the name of a model to open it on the design workspace. For more information, see Chapter 3, Designing a Business Model.
Job Designs: The Job Designs folder shows the tree view of the designed Jobs for the current project. Double-click the name of a Job to open it on the design workspace. For more information, see Chapter 4, Designing a data integration Job.
Contexts: The Contexts folder groups files holding the contextual variables that you want to reuse in various Jobs, such as file paths or DB connection details. For more information, see Section 4.4.2, How to centralize contexts and variables.
Code: The Code folder is a library that groups the routines available for this project and other pieces of code that could be reused in the project. Click the relevant tree entry to expand the appropriate code piece. For more information, see Chapter 4, Designing a data integration Job.
SQL Templates: The SQL Templates folder groups all system SQL templates and gives you the possibility to create user-defined SQL templates. For more information, see Section 4.4.3, How to use the SQL Templates.
Metadata: The Metadata folder bundles files holding redundant information you want to reuse in various Jobs, such as schemas and property data. For more information, see Chapter 4, Designing a data integration Job.
Documentation: The Documentation folder gathers all types of documents, of any format. This could be, for example, specification documents or a description of the technical format of a file. Double-click a document to open it in the relevant application. For more information, see Section 5.6.1, How to generate HTML documentation.
Recycle bin: The Recycle bin groups all elements deleted from any folder in the Repository tree view. The deleted elements are still present on your file system, in the recycle bin, until you right-click the recycle bin icon and select Empty Recycle bin.
Design workspace
Expand the recycle bin to view any folders, subfolders or elements held within. You can act on an element directly from the recycle bin: restore it, or delete it forever, by right-clicking the element and selecting the desired action from the list.
A Palette is docked at the top of the design workspace to help you draw the model corresponding to your workflow needs.
A.5. Palette
From the Palette, depending on whether you are designing a Job or modeling a Business Model, you can drop technical components or shapes, branches and notes to the design workspace. You can define and format them using the various tools offered in the Business Model view for the Business Models and in the Component view for the Job.
Configuration tabs
Related topics: Chapter 3, Designing a Business Model. Chapter 4, Designing a data integration Job. Section 4.2.8.1, How to change the Palette layout and settings.
The Component, Run Job, Problems and Error Log views gather all information relative to the graphical elements selected in the design workspace or to the actual execution of the open Job. The Modules and Scheduler tabs are located in the same tab system as the Component, Logs and Run Job tabs. Both views are independent of the active or inactive Jobs open on the design workspace. You can show more tabs in this tab system and directly open the corresponding view: select Window > Show view and then, in the open dialog box, expand any node and select the element you want to display. The sections below describe each of the configuration tabs.

Component: This view details the parameters specific to each component of the Palette. To build a Job that functions, you are required to fill out the necessary fields of this Component view for each component forming your Job. For more information about the Component view, see Section 4.2.6, How to define component properties.

Run Job: This view shows the current Job execution. It becomes a log console at the end of an execution.
For details about Job execution, see Section 4.2.7, How to run a Job.

Error Log: This view is mainly used for Job execution errors. It shows the history of warnings or errors occurring during Job executions. The log also has an informative function, for example on a Java component's operating progress. The Error Log tab is hidden by default. As for any other view, go to Window > Show views, then expand the PDE Runtime node and select Error Log to display it in the tab system.
Modules
This view shows whether a module is necessary and required for the use of a referenced component. Checking the Modules view helps you verify which modules you have, or should have, to run your Jobs smoothly. For more information, see Section 4.5.4, How to install external modules.
Scheduler
This view enables you to schedule a task that periodically launches the Job you select, via the crontab program. For more information, see Section 4.5.5, How to launch a Job periodically.
Job view
The Job view displays various information related to the Job open on the design workspace. This view has the following tabs:

Main tab: This tab displays basic information about the Job opened on the design workspace, i.e. its name, author, version number, etc. The information is read-only. To edit it, you have to close your Job, right-click its label in the Repository tree view and click Edit properties in the drop-down list.

Extra tab: This tab displays extra parameters, including the multi thread and implicit context loading features. For more information, see Section 4.6.7.2, How to use the features in the Extra tab.

Stats/Log tab: This tab allows you to enable/disable the statistics and logs for the whole Job. You can already enable these features for every single component of your Job by simply using and setting the relevant components: tFlowMeterCatcher, tStatCatcher, tLogCatcher. For more information about these components, see Talend Open Studio Components Reference Guide. In addition, you can set these features for the whole active Job (i.e. all components of your Job) in one go, without using the Catcher components mentioned above. This way, all components get tracked and logged in the File or Database table according to your settings. You can also save the current settings to Project Settings by clicking the relevant button. For more details about the Stats & Logs automation, see Section 4.6.7.1, How to automate the use of statistics & logs.

Version tab: This tab displays the different versions of the Job opened on the design workspace and their creation and modification dates.
Problems: This view displays the messages linked to the icons docked at a component in case of problem, for example when part of its settings is missing. Three types of icons/messages exist: Error, Warning and Info. For more information, see Section 4.6.3.1, Warnings and error icons on components.
Job Hierarchy
This view displays a tree folder showing the child Job(s) of the selected parent Job. To show this view, right-click the parent Job in the Repository tree view and select Open Job Hierarchy from the drop-down list. You can also show this view via Window > Show view..., where you select Talend > Job Hierarchy. You can see the Job Hierarchy only if you create a parent Job and one or more child Job(s) via the tRunJob component. For more information about tRunJob, see Talend Open Studio Components Reference Guide.
Properties
When you insert a shape in the design workspace, the Properties view offers a range of formatting tools to help you customize your business model and improve its readability.
Runs the current Job, or shows the Run Job view if no Job is open. (Global application)
Shows the Modules view. (Global application)
Shows the Problems view. (Global application)
Shows the Designer view of the current Job. (Global application)
Shows the Code view of the current Job. (Global application)
Restores the initial Repository view. (From Repository view)
Synchronizes javajet components. (Global application)
Opens a Job. (Global application, in Windows)
Switches to Debug mode. (From Run Job view)
Refreshes the Repository view. (From Repository view)
Kills the current Job. (From Run Job view)
Refreshes the Modules install status. (From Modules view)
Executes SQL queries. (Talend commands, in Windows)
Accesses global and user-defined variables, for example error messages or line numbers, depending on the component selected. (From any component field in Job or Component views)
The County column is fed with the name of the county where the city is located, using a reference file which will help filter the cities of Orange and Los Angeles counties.
To the right: the Palette of business or technical components, depending on the software tool you are using within Talend Open Studio for Data Integration.

To the left of the Studio, the Repository tree view gives access to:
The Business Modeler: For more information, see Section 3.3, Modeling a Business Model.
The Job Designer: For details about this part, see Section 4.2, Getting started with a basic Job design.
The Metadata Manager: For details about this part, see Section 4.4.1, How to centralize the Metadata items.
Contexts and routines: For details, see Section 4.4, Using the Metadata Manager.

To create the Job, right-click Job Designs in the Repository tree view and select Create Job. In the dialog box that opens, only the first field (Name) is required. Type in California1 and click Finish.

An empty Job then opens on the main window, and the Palette of technical components (by default, to the right of the Studio) comes up, showing a dozen component families such as Databases, Files, Internet and Data Quality; hundreds of components are already available.

To read the file California_Clients, let's use the tFileInputDelimited component. This component can be found in the File/Input group of the Palette. Click this component, then click to the left of the design workspace to place it on the design area.

Let's now define the reading properties for this component: file path, column delimiter, encoding... To do so, let's use the Metadata Manager. This tool offers numerous wizards that will help us configure parameters and allow us to store these properties for one-click reuse in all future Jobs we may need.

As our input file is a delimited flat file, let's select File delimited on the right-click list of the Metadata folder in the Repository tree view, then select Create file delimited. A wizard dedicated to delimited files displays:

At Step 1, only the Name field is required: simply type in California_clients and go to the next step.
At Step 2, select the input file (California_Clients.csv) via the Browse... button. Immediately an extract of the file shows in the Preview, at the bottom of the screen, so that you can check its content. Click Next.

At Step 3, we will define the file parameters: file encoding, line and column delimiters... As our input file is pretty standard, most default values are fine. The first line of our file is a header containing column names. To retrieve these names automatically, click Set heading row as column names and then click Refresh Preview. Then click Next to go to the last step.

At Step 4, each column of the file is to be set. The wizard includes algorithms which guess the type and length of each column based on the file's first rows of data. The suggested data description (called schema in Talend Open Studio for Data Integration) can be modified at any time. In this particular scenario, it can be used as is.

There you go, the California_clients metadata is complete! We can now use it in our input component. Select the tFileInputDelimited component you dropped on the design workspace earlier, and select the Component view at the bottom of the window. Select the vertical tab Basic settings. In this tab, you'll find all the technical properties required to make the component work. Rather than setting each of these properties one by one, let's use the Metadata entry we just defined.

Select Repository as Property type in the list. A new field shows: Repository. Click the [...] button and select the relevant Metadata entry from the list: California_clients. You can notice that all parameters now get filled out automatically.
At this stage, we will terminate our flow by simply sending the data read from this input file to the standard output (StdOut). To do so, add a tLogRow component (from the Logs & Errors group). To link the two components, right-click the input component and select Row > Main. Then click the output component: tLogRow. This Job is now ready to be executed. To run it, select the Run tab on the bottom panel. Enable the statistics by selecting the Statistics check box in the Advanced settings vertical tab of the Run view, then run the Job by clicking Run in the Basic Run tab.
The content of the input file thus displays in the console.
To the left, you can see the schema (description) of your input file (row1). To the right, your output is, for the time being, still empty (out1). Drop the Firstname and Lastname columns to the right, onto the Name column, as shown on the screen below. Then drop the other columns, Address and City, to their respective lines.
Then carry out the following transformations on each column:
Change the Expression of the Name column to row1.Firstname + " " + row1.Lastname. This concatenates the Firstname column with the Lastname column, following strictly this Java syntax, so that the two columns display together in one column.
Change the Expression of the Address column to row1.Address.toUpperCase(), which changes the address case to upper case.
Then remove the Lastname column from the out1 table and increase the length of the remaining columns. To do so, go to the Schema Editor located at the bottom of the tMap editor and proceed as follows:
1. Select the column to be removed from the schema, and click the cross icon.
2. Select the column whose length you need to increase.
3. Type in the length you want in the Length column. In this example, change the length of every remaining column to 40.

As the first name and the last name of a client are concatenated, it is necessary to increase the length of the Name column in order to match the full name size.
No transformation is made to the City column. Click OK to validate the changes and close the Map editor interface. If you run your Job at this stage (via the Run view, as we did before), you'll notice that the changes you defined are implemented.
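The two tMap expressions used above are plain Java. As a rough illustration of what they evaluate per row (the method names below are ours, and the sample values are hypothetical; this is a sketch, not the code tMap generates):

```java
public class TMapExpressions {
    // Stands in for the Name column expression: row1.Firstname + " " + row1.Lastname
    public static String name(String firstname, String lastname) {
        return firstname + " " + lastname;
    }

    // Stands in for the Address column expression: row1.Address.toUpperCase()
    public static String address(String address) {
        return address.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(name("John", "Doe"));       // John Doe
        System.out.println(address("12 main street")); // 12 MAIN STREET
    }
}
```

This is why the Name column's length had to grow to 40: the concatenated value is as long as both source columns plus the separating space.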
For example, the addresses are displayed in upper case and the first names and last names are gathered together in the same column.
B.1.2.3. Step 3: Reference file definition, re-mapping, inner join mode selection
Define the Metadata corresponding to the LosAngelesandOrangeCounties.txt file just the way we did previously for the California_clients file, using the wizard. At Step 1 of the wizard, name this metadata entry LA_Orange_cities. Then drop this newly created metadata to the top of the design area to automatically create a reading component pointing to this metadata, and link this component to the tMap component.
Double-click the tMap component again to open its interface. Note that the reference input table (row2), corresponding to the LA and Orange counties file, shows to the left of the window, right under your main input (row1). Now let's define the join between the main flow and the reference flow. In this use case, the join is pretty easy to define, as the City column is present in both files and the data match perfectly. Even if this were not the case, we could have carried out operations directly at this level to establish a link between the data (padding, case change...). To implement the join, drop the City column from your first input table onto the City column of your reference table. A violet link then displays, to materialize this join.
Now, we are able to use the County column from the reference table in the output table (out1).
Finally, click the OK button to validate your changes, and run the new Job. The following output should display in the console.
As you can notice, the last column is only filled out for the cities of Los Angeles and Orange counties. For all other lines, this column is empty. The reason for this is that, by default, tMap implements a left outer join. If you want to filter your data to only display lines for which a match is found by tMap, open tMap again, click the tMap settings button and select Inner Join in the Join Model list of the reference table (row2).
On the Basic settings tab of this component:
1. Type in LA_Orange_Clients in the Table field, in order to name the target table which will get created on the fly.
2. Select the Drop table if exists and create option in the Action on table field.
3. Click Edit schema and click the Reset DB type button (DB button on the tool bar) in order to fill out the DB type automatically, if need be.
Run the Job again. The target table should be automatically created and filled with data in less than a second! In this scenario, we used only four components out of the hundreds available in the Palette, grouped in different categories (databases, Web services, FTP and so on). More components, created by the community, are also available on the community site (talendforge.org). For more information regarding the components, check out Talend Open Studio Components Reference Guide.
id (Type: Integer)
CustomerName (Type: String)
CustomerAge (Type: Integer)
CustomerAddress (Type: String)
CustomerCity (Type: String)
RegisterTime (Type: Date)
2. Click the three-dot button next to the File name/Stream field to browse to the path of the input data file. You can also type in the path of the input data file manually.
3. Click Edit schema to open a dialog box in which to configure the file structure of the input file.
4. Click the plus button to add six columns and set their Type and column names to those listed above.
5.
B.2.2.2. Step 2: Setting the command to enable the output stream feature
Now we will make use of tJava to set the command for creating an output file and a directory that contains the output file. To do so: 1. Drop a tJava component onto the design workspace, and double-click it to open the Basic settings view to set its properties.
2. Fill in the Code area with the following command:

new java.io.File("C:/myFolder").mkdirs();
globalMap.put("out_file", new java.io.FileOutputStream("C:/myFolder/customerselection.txt", false));

The command typed in this step creates a new directory, C:/myFolder, for saving the output file customerselection.txt, which is defined in the following steps. You can customize the command according to your actual needs.
3. Connect tJava to tFileInputDelimited using a Trigger > On Subjob Ok connection. This will trigger the subjob that starts with tFileInputDelimited once tJava succeeds in running.
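The Use Output Stream feature works because the stream stored in globalMap can later be fetched and written to by the output component. A minimal plain-Java sketch of this create-then-reuse pattern (using a temporary directory instead of C:/myFolder, and a local map standing in for Talend's globalMap; names and data are illustrative):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.util.HashMap;
import java.util.Map;

public class OutputStreamSketch {
    // Creates the folder, registers the stream in a map (standing in for
    // Talend's globalMap), writes one row through it, and returns the file content.
    public static String demo() throws Exception {
        Map<String, Object> globalMap = new HashMap<>();

        // tJava step: create the directory and register the output stream.
        File dir = Files.createTempDirectory("myFolder").toFile();
        File out = new File(dir, "customerselection.txt");
        globalMap.put("out_file", new FileOutputStream(out, false));

        // Later, the output component fetches the stream back and writes rows to it.
        try (OutputStream os = (OutputStream) globalMap.get("out_file")) {
            os.write("1;Griffith;32\n".getBytes("UTF-8"));
        }
        return new String(Files.readAllBytes(out.toPath()), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.print(demo());
    }
}
```

The `false` constructor argument mirrors the command above: the file is overwritten rather than appended to on each run.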
2. Click the three-dot button next to Map Editor to open a dialog box to set the mapping.
3. Click the plus button on the left to add six columns for the schema of the incoming data; these columns should be the same as the following:
4. Click the plus button on the right to add a schema for the outgoing data flow.
5. Select New output and click OK to save the output schema. For the time being, the output schema is still empty.
6. Click the plus button beneath the out1 table to add three columns for the output data.
7. Drop the id, CustomerName and CustomerAge columns onto their respective lines on the right.
8.
3. Connect tFileInputDelimited to tMap using a Row > Main connection, and connect tMap to tFileOutputDelimited using the Row > out1 connection defined in the Map Editor of tMap.
4. Click Sync columns to retrieve the schema defined in the preceding component.
To output the selected data to the console:
1. Drop a tLogRow component onto the design workspace, and double-click it to open its Basic settings view.
2. Select the Table radio button in the Mode area.
3. Connect tFileOutputDelimited to tLogRow using a Row > Main connection.
4. Click Sync columns to retrieve the schema defined in the preceding component. This Job is now ready to be executed.
5. Press Ctrl+S to save your Job, then press F6 to execute it. The content of the selected data is displayed on the console.
The selected data is also output to the specified local file customerselection.txt.
For an example of Job using this feature, see Scenario: Utilizing Output Stream in saving filtered data to a local file of tFileOutputDelimited in the Talend Open Studio Components Reference Guide. For the principle of the Use Output Stream feature, see Section 4.5.7, How to use the Use Output Stream feature.
Numeric Routines
sequence: Returns an incremental numeric ID. Syntax: Numeric.sequence("Parameter name", start value, increment value)
resetSequence: Creates a sequence if it doesn't exist and attributes a new start value to it. Syntax: Numeric.resetSequence(Sequence Identifier, start value)
removeSequence: Removes a sequence. Syntax: Numeric.removeSequence(Sequence Identifier)
random: Returns a random whole number between the minimum and maximum values. Syntax: Numeric.random(minimum start value, maximum end value)
convertImpliedDecimalFormat: Returns a decimal with the help of an implicit decimal model. Syntax: Numeric.convertImpliedDecimalFormat("Target Format", value to be converted)
The routine automatically converts the value entered as a parameter according to the format of the implied decimal provided:
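Stepping back to Numeric.sequence: conceptually it is a named, per-Job counter. A rough plain-Java sketch of that behavior (our own minimal stand-in, not Talend's implementation; the storage and names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class SequenceSketch {
    private static final Map<String, Integer> seqs = new HashMap<>();

    // Mimics Numeric.sequence("name", start, step): creates the sequence on
    // first use, then returns start, start+step, start+2*step, ...
    public static int sequence(String name, int start, int step) {
        Integer cur = seqs.get(name);
        int next = (cur == null) ? start : cur + step;
        seqs.put(name, next);
        return next;
    }

    public static void main(String[] args) {
        System.out.println(sequence("s1", 1, 1));    // 1
        System.out.println(sequence("s1", 1, 1));    // 2
        System.out.println(sequence("s2", 100, 10)); // 100, independent of s1
    }
}
```

Each distinct name keeps its own counter, which is why resetSequence and removeSequence take a sequence identifier as their first parameter.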
Relational Routines
To check a Relational Routine, you can use the ISNULL routine, along with a tJava component, for example:
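As an illustration of what ISNULL evaluates, here is a plain-Java stand-in (our own method, shown only to clarify the semantics; Relational.ISNULL itself is provided by Talend):

```java
public class IsNullSketch {
    // Stand-in for Relational.ISNULL(variable): true when the variable is null.
    public static boolean isNull(Object variable) {
        return variable == null;
    }

    public static void main(String[] args) {
        System.out.println(isNull(null));     // true
        System.out.println(isNull("Talend")); // false
    }
}
```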
ALPHA: Checks whether the expression is arranged in alphabetical order. Returns the true or false boolean accordingly. Syntax: StringHandling.ALPHA("string to be checked")
IS_ALPHA: Checks whether the expression contains alphabetical characters only, or otherwise. Returns the true or false boolean accordingly. Syntax: StringHandling.IS_ALPHA("string to be checked")
CHANGE: Replaces an element of a string with a defined replacement element and returns the new string. Syntax: StringHandling.CHANGE("string to be checked", "string to be replaced", "replacement string")
COUNT: Returns the number of times a substring occurs within a string. Syntax: StringHandling.COUNT("string to be checked", "substring to be counted")
DOWNCASE: Converts all uppercase letters in an expression into lowercase and returns the new string. Syntax: StringHandling.DOWNCASE("string to be converted")
UPCASE: Converts all lowercase letters in an expression into uppercase and returns the new string. Syntax: StringHandling.UPCASE("string to be converted")
DQUOTE: Encloses an expression in double quotation marks. Syntax: StringHandling.DQUOTE("string to be enclosed in double quotation marks")
INDEX: Returns the position of the first character of a specified substring within a whole string. If the specified substring does not exist in the whole string, the value -1 is returned. Syntax: StringHandling.INDEX("string to be checked", "substring specified")
LEFT: Specifies a substring which corresponds to the first n characters in a string. Syntax: StringHandling.LEFT("string to be checked", number of characters)
RIGHT: Specifies a substring which corresponds to the last n characters in a string. Syntax: StringHandling.RIGHT("string to be checked", number of characters)
LEN: Calculates the length of a string. Syntax: StringHandling.LEN("string to check")
SPACE: Generates a string consisting of a specified number of blank spaces. Syntax: StringHandling.SPACE(number of blank spaces to be generated)
SQUOTE: Encloses an expression in single quotation marks. Syntax: StringHandling.SQUOTE("string to be enclosed in single quotation marks")
STR: Generates a particular character the number of times specified. Syntax: StringHandling.STR(character to be generated, number of times)
TRIM: Deletes the spaces and tabs before the first non-blank character in a string and after the last non-blank character, then returns the new string. Syntax: StringHandling.TRIM("string to be checked")
BTRIM: Deletes all the spaces and tabs after the last non-blank character in a string and returns the new string. Syntax: StringHandling.BTRIM("string to be checked")
FTRIM: Deletes all the spaces and tabs preceding the first non-blank character in a string. Syntax: StringHandling.FTRIM("string to be checked")
The routine replaces the old element with the new element specified.
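Several of these routines map naturally onto standard java.lang.String methods. The following sketch shows standard-Java stand-ins for a few of them (these are equivalences for illustration, not the StringHandling implementations themselves):

```java
public class StringOps {
    public static void main(String[] args) {
        // UPCASE / DOWNCASE equivalents
        System.out.println("Talend".toUpperCase());      // TALEND
        System.out.println("Talend".toLowerCase());      // talend
        // TRIM equivalent: strips leading and trailing whitespace
        System.out.println("  Talend  ".trim());         // Talend
        // CHANGE equivalent: replace every occurrence of a substring
        System.out.println("abcabc".replace("a", "x"));  // xbcxbc
        // LEN equivalent
        System.out.println("Talend".length());           // 6
    }
}
```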
C.3.4. How to check the position of a specific character or substring, within a string
The INDEX routine is easy to use along with a tJava component, to check whether a string contains a specified character or substring:
The routine returns a whole number which indicates the position of the first character specified, or of the first character of the substring specified. If no occurrence is found, -1 is returned.
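This behavior matches Java's standard String.indexOf, shown here as an illustrative stand-in for INDEX (not the StringHandling implementation itself):

```java
public class IndexSketch {
    public static void main(String[] args) {
        // Position of the substring's first character, counting from 0.
        System.out.println("hello world".indexOf("world")); // 6
        // -1 when the substring does not occur in the string.
        System.out.println("hello world".indexOf("xyz"));   // -1
    }
}
```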
The check returns a whole number which indicates the length of the string, including spaces and blank characters.
The routine returns the string with the blank characters removed from the beginning.
getFirstName: Returns a first name taken randomly from a fictitious list. Syntax: TalendDataGenerator.getFirstName()
getLastName: Returns a random surname from a fictitious list. Syntax: TalendDataGenerator.getLastName()
getUsStreet: Returns an address taken randomly from a list of common American street names. Syntax: TalendDataGenerator.getUsStreet()
getUsCity: Returns the name of a town taken randomly from a list of American towns. Syntax: TalendDataGenerator.getUsCity()
getUsState: Returns the name of a State taken randomly from a list of American States. Syntax: TalendDataGenerator.getUsState()
getUsStateId: Returns an ID taken randomly from a list of IDs attributed to American States. Syntax: TalendDataGenerator.getUsStateId()
You can customize the fictitious data by modifying the TalendGeneratorRoutines. For further information on how to customize routines, see Section 8.3, Customizing the system routines.
The set of data taken randomly from the list of fictitious data is displayed in the Run view:
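Conceptually, each generator simply picks a random entry from a fixed list, which is also why editing TalendGeneratorRoutines changes the output. A minimal sketch of that idea in plain Java (the names below are made up for illustration, not Talend's actual list):

```java
import java.util.Random;

// Illustrative only: picks a random entry from a fixed list, which is
// the idea behind the TalendDataGenerator routines. The names here are
// invented examples, not the contents of TalendGeneratorRoutines.
public class GeneratorSketch {
    private static final String[] FIRST_NAMES = {"Ada", "Grace", "Alan", "Edsger"};
    private static final Random RANDOM = new Random();

    public static String getFirstName() {
        return FIRST_NAMES[RANDOM.nextInt(FIRST_NAMES.length)];
    }

    public static void main(String[] args) {
        // Prints one of the four names above, chosen at random
        System.out.println(getFirstName());
    }
}
```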
addDate: adds n days, n months, n hours, n minutes or n seconds to a Java date and returns the new date. The date format is: "yyyy", "MM", "dd", "HH", "mm", "ss" or "SSS".
Syntax: TalendDate.addDate("initial date string", "date format - e.g.: yyyy/MM/dd", whole number n, "format of the part of the date to which n is to be added - e.g.: yyyy")

compareDate: compares all or part of two dates according to the format specified. Returns 0 if the dates are identical, 1 if the first date is older than the second and -1 if it is more recent than the second.
Syntax: TalendDate.compareDate(Date date1, Date date2, "format to be compared - e.g.: yyyy-MM-dd")

diffDate: returns the difference between two dates in terms of days, months or years according to the comparison parameter specified.
Syntax: TalendDate.diffDate(Date1(), Date2(), "format of the part of the date to be compared - e.g.: yyyy")

diffDateFloor: returns the difference between two dates by floor in terms of years, months, days, hours, minutes, seconds or milliseconds according to the comparison parameter specified.
Syntax: TalendDate.diffDateFloor(Date1(), Date2(), "format of the part of the date to be compared - e.g.: MM")
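The addDate arithmetic behaves like java.util.Calendar field addition, with the format string selecting which field to change. A hedged sketch of the "dd" case using only the standard library (not the TalendDate implementation itself):

```java
import java.util.Calendar;
import java.util.Date;

// Hedged sketch of addDate-style arithmetic using java.util.Calendar.
// In TalendDate the format string ("yyyy", "MM", "dd", ...) selects the
// field to add to; this sketch hard-codes the "dd" (day) case.
public class DateAddSketch {
    public static Date addDays(Date date, int n) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(date);
        cal.add(Calendar.DAY_OF_MONTH, n); // rolls over months/years as needed
        return cal.getTime();
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.set(2012, Calendar.JANUARY, 31); // Calendar months are zero-based
        Date result = addDays(cal.getTime(), 1);
        Calendar check = Calendar.getInstance();
        check.setTime(result);
        System.out.println(check.get(Calendar.MONTH) == Calendar.FEBRUARY); // true
        System.out.println(check.get(Calendar.DAY_OF_MONTH));               // 1
    }
}
```

Note how adding one day to 31 January correctly rolls the date over into February.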
formatDate: returns a date string which corresponds to the format specified.
Syntax: TalendDate.formatDate("date format - e.g.: yyyy-MM-dd HH:mm:ss", Date() to be formatted)

formatDateLocale: changes a date into a date/hour string according to the format used in the target country.
Syntax: TalendDate.formatDateLocale("target format", java.util.Date date, "language or country code")

getCurrentDate: returns the current date. No entry parameter is required.
Syntax: TalendDate.getCurrentDate()

getDate: returns the current date and hour in the format specified (optional). This string can contain fixed character strings or variables linked to the date. By default, the string is returned in the format DD/MM/CCYY.
Syntax: TalendDate.getDate("format of the string - e.g.: CCYY-MM-DD")

getFirstDayOfMonth: changes the date of an event to the first day of the current month and returns the new date.
Syntax: TalendDate.getFirstDayOfMonth(Date)

getLastDayOfMonth: changes the date of an event to the last day of the current month and returns the new date.
Syntax: TalendDate.getLastDayOfMonth(Date)

getPartOfDate: returns part of a date according to the format specified. This string can contain fixed character strings or variables linked to the date.
Syntax: TalendDate.getPartOfDate("String indicating the part of the date to be retrieved", "String in the format of the date to be parsed")

getRandomDate: returns a random date between the two boundary dates specified.
Syntax: TalendDate.getRandomDate("date format of the character string", String minDate, String maxDate)

isDate: checks whether the date string corresponds to the format specified. Returns the boolean value true or false according to the outcome.
Syntax: TalendDate.isDate(Date() to be checked, "format of the date to be checked - e.g.: yyyy-MM-dd HH:mm:ss")

parseDate: changes a string into a Date. Returns a date in the standard format.
Syntax: TalendDate.parseDate("date format of the string to be parsed", "string in the format of the date to be parsed")

parseDateLocale: parses a string according to a specified format and extracts the date. Returns the date according to the locale specified.
Syntax: TalendDate.parseDateLocale("date format of the string to be parsed", "String in the format of the date to be parsed", "code corresponding to the country or language")

setDate: modifies part of a date according to the part and value of the date specified and the format specified.
Syntax: TalendDate.setDate(Date, whole number n, "format of the part of the date to be modified - e.g.: yyyy")
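The formatDate/parseDate pair follows the standard java.text.SimpleDateFormat pattern conventions. A hedged sketch of the round trip using SimpleDateFormat directly (illustrative, not the TalendDate implementation):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Hedged sketch: formatDate and parseDate follow SimpleDateFormat-style
// patterns. This shows the string -> Date -> string round trip directly.
public class ParseFormatSketch {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date parsed = fmt.parse("2012-05-21 10:30:00"); // string -> Date
        String formatted = fmt.format(parsed);          // Date -> string
        System.out.println(formatted); // 2012-05-21 10:30:00
    }
}
```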
The current date is initialized according to the pattern specified by the new Date() Java function and is displayed in the Run view:

The current date is initialized by the Java function new Date() and the value -1 is displayed in the Run view to indicate that the current date precedes the reference date.
The current date, followed by the new date are displayed in the Run view:
In this example, the day of month (DAY_OF_MONTH), the month (MONTH), the year (YEAR), the day number of the year (DAY_OF_YEAR) and the day number of the week (DAY_OF_WEEK) are returned in the Run view. All the returned data are numeric data types.
In the Run view, the date string referring to the months (MONTH) starts with 0 and ends with 11: 0 corresponds to January, 11 corresponds to December.
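This zero-based month numbering comes from the underlying java.util.Calendar class, as the following small example shows:

```java
import java.util.Calendar;

// Demonstrates the zero-based month numbering mentioned above:
// Calendar.MONTH runs from 0 (January) to 11 (December), while
// day-of-month and year values are the usual human-readable numbers.
public class MonthIndexSketch {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.set(2012, Calendar.DECEMBER, 25);
        System.out.println(cal.get(Calendar.MONTH));        // 11
        System.out.println(cal.get(Calendar.DAY_OF_MONTH)); // 25
        System.out.println(cal.get(Calendar.YEAR));         // 2012
    }
}
```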
TalendString Routines
replaceSpecialCharForXML: returns a string from which the special characters (e.g.: <, >, &...) have been replaced by equivalent XML characters.
Syntax: TalendString.replaceSpecialCharForXML("string containing the special characters - e.g.: Thelma & Louise")

checkCDATAForXML: identifies characters starting with <![CDATA[ and ending with ]]> as pertaining to XML and returns them without modification. Transforms the strings not identified as XML into a form which is compatible with XML and returns them.
Syntax: TalendString.checkCDATAForXML("string to be parsed")

talendTrim: parses the entry string and removes the filler characters from the start and end of the string according to the alignment value specified: -1 for the filler characters at the end of the string, 1 for those at the start of the string and 0 for both. Returns the trimmed string.
Syntax: TalendString.talendTrim("string to be parsed", "filler character to be removed", character position)

removeAccents: removes accents from a string and returns the string without the accents.
Syntax: TalendString.removeAccents("String")

getAsciiRandomString: generates a random string with a specific number of characters.
Syntax: TalendString.getAsciiRandomString(whole number indicating the length of the string)
In this example, the "&" character is replaced in order to make the string XML compatible:
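The replacement follows the standard XML entity escaping rules. A hedged plain-Java sketch of the same escaping (illustrative, not Talend's own code):

```java
// Illustrative equivalent of replaceSpecialCharForXML: replace the XML
// special characters with their predefined entity references.
// Not the Talend implementation itself.
public class XmlEscapeSketch {
    public static String escapeForXml(String s) {
        return s.replace("&", "&amp;")  // must run first, or it would re-escape the others
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }
    public static void main(String[] args) {
        System.out.println(escapeForXml("Thelma & Louise")); // Thelma &amp; Louise
        System.out.println(escapeForXml("<tag>"));           // &lt;tag&gt;
    }
}
```

Note the ordering: the ampersand must be escaped before the angle brackets, otherwise the "&" produced by "&lt;" would itself be escaped again.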
The star characters are removed from the start, then the end of the string and then finally from both ends:
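The alignment values map to the three trimming modes described in the table. A hedged sketch of the same filler-stripping idea in plain Java (illustrative, not the talendTrim implementation):

```java
// Illustrative filler-character trimming in the spirit of talendTrim:
// alignment -1 strips trailing fillers, 1 strips leading fillers,
// and 0 strips both ends. Not Talend's own code.
public class FillerTrimSketch {
    public static String trimFiller(String s, char filler, int alignment) {
        int start = 0, end = s.length();
        if (alignment >= 0) {  // 1 or 0: strip fillers at the start
            while (start < end && s.charAt(start) == filler) start++;
        }
        if (alignment <= 0) {  // -1 or 0: strip fillers at the end
            while (end > start && s.charAt(end - 1) == filler) end--;
        }
        return s.substring(start, end);
    }
    public static void main(String[] args) {
        System.out.println(trimFiller("**text**", '*', 1));  // text**
        System.out.println(trimFiller("**text**", '*', -1)); // **text
        System.out.println(trimFiller("**text**", '*', 0));  // text
    }
}
```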
SQL statements
Within this syntax, neither the <%=...%> nor the </.../> syntax should be used. <%=...%> and </.../> are also syntaxes intended for the SQL templates; the sections below describe them. Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purposes and can easily be defined in components, such as TABLE_NAME, DB_VERSION and SCHEMA_TYPE.
For more information on the <%...%> syntax, see Section D.3, The <%...%> syntax. For more information on the <%=...%> syntax, see Section D.4, The <%=...%> syntax. The following sections present more specific code used to access more complicated parameters.
The </.../> approach:

</.../> is one of the syntaxes used by the SQL templates. This approach often requires hard coding for every parameter to be extracted. For example, suppose a new parameter is created by the user and given the name NEW_PROPERTY. If you want to access it by using </NEW_PROPERTY/>, the following code is needed:

else if (paramName.equals("NEW_PROPERTY")) {
    List<Map<String, String>> newPropertyTableValue =
        (List<Map<String, String>>) ElementParameterParser.getObjectValue(node, "__NEW_PROPERTY__");
    for (int ii = 0; ii < newPropertyTableValue.size(); ii++) {
        Map<String, String> newPropertyMap = newPropertyTableValue.get(ii);
        realValue += ...; // append generated code
    }
}

The EXTRACT(__GROUPBY__); approach:

The following code shows the second way to access the tabular parameter (GROUPBY):

<%
String query = "insert into " + __TABLE_NAME__ +
    "(id, name, date_birth) select sum(id), name, date_birth from cust_teradata group by ";
EXTRACT(__GROUPBY__);
for (int i = 0; i < __GROUPBY_LENGTH__; i++) {
    query += (__GROUPBY_INPUT_COLUMN__[i] + " ");
}
%>
<%=query %>;

When coding the statements, respect the following rules:

The extract statement must be written exactly as EXTRACT(__GROUPBY__);. Upper case must be used and no space character is allowed. This statement must be used between <% and %>.

Use __GROUPBY_LENGTH__, in which the parameter name is followed by _LENGTH, to get the number of lines of the tabular GROUPBY parameter you define in the Groupby area of a Component view. It can be used between <% and %> or between <%= and %>.

Use code like __GROUPBY_INPUT_COLUMN__[i] to extract the parameter values. This can be used between <% and %> or between <%= and %>.

In order to access the parameter correctly, do not use an identical name prefix for several parameters. For example, avoid defining two parameters named PARAMETER_NAME and PARAMETER_NAME_2 in the same component, as the shared prefix in the names causes erroneous code generation.
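Once the template runs, the loop simply appends each extracted GROUPBY column to the query string. A hedged plain-Java sketch of the generated logic (the table and column names are examples taken from the template above, not a fixed API):

```java
// Plain-Java sketch of what the GROUPBY template loop generates: each
// extracted input column is appended to the query string in turn.
// The table and column names are illustrative examples only.
public class GroupBySketch {
    public static String buildQuery(String table, String[] groupByColumns) {
        StringBuilder query = new StringBuilder(
            "insert into " + table +
            "(id, name, date_birth) select sum(id), name, date_birth" +
            " from cust_teradata group by ");
        for (String column : groupByColumns) { // mirrors the __GROUPBY_LENGTH__ loop
            query.append(column).append(" ");
        }
        return query.toString();
    }
    public static void main(String[] args) {
        System.out.println(buildQuery("cust_target",
                new String[] {"name,", "date_birth"}));
    }
}
```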