InetSoft Product Information: Defining CORBA Architecture Data Sources

When defining a CORBA data source, the Data Modeler needs to import the IDL-generated classes in order to analyze the method parameters. This requires that the IDL definition be properly compiled and that the classes generated from the IDL be accessible from the CLASSPATH.

Walkthrough

Compile the IDL definition file.

An IDL file can be compiled with the IDL compiler that comes with your CORBA software. JDK 1.2 and later include an IDL compiler; in JDK 1.2.x the compiler is ‘idltojava’.
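
For orientation, the sketch below shows roughly what a hypothetical ‘Bank’ interface declared inside a module named ‘corba’ might look like after the IDL is compiled to Java. The operations are made up for illustration, and the exact set of generated files (Helper, Holder, stub, and skeleton classes) depends on the IDL compiler and its version.

    // Illustrative sketch only: roughly the Java interface an IDL compiler
    // generates for a hypothetical declaration such as
    //   module corba { interface Bank { ... }; };
    // Older compilers (e.g. idltojava) place operations directly on the
    // interface; newer mappings split them into a separate Operations type.
    package corba;

    public interface Bank extends org.omg.CORBA.Object,
                                  org.omg.CORBA.portable.IDLEntity {
        // One Java method per IDL operation; both are made up for this example.
        float getBalance(String accountId);
        String[] listAccounts();
    }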

1. Make sure the class files are accessible from the CLASSPATH.

2. To use the example CORBA server, add the full path of the ‘guide’ directory to your CLASSPATH.

3. Make sure that all Java files generated from the IDL are properly compiled.

4. Launch the Data Modeler.

5. Select the ‘New Data Source’ button to create a new data source.

6. Enter the name of the data source and select ‘corba’ as the data source type. Click ‘OK’.

The CORBA data source uses the CORBA name service to locate a CORBA server. Therefore, a data source must contain the name of the name service, as well as the name of the CORBA server, in order to communicate with the server.
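
The following sketch illustrates, conceptually, the lookup this implies: a CORBA client initializes an ORB pointed at the name service host and port, resolves the ‘NameService’ reference, and then resolves the server name to obtain an object reference. The host, port, and ‘BankServer’ values are illustrative, and ‘corba.BankHelper’ is assumed to be one of the IDL-generated classes; this is not InetSoft code, only a sketch of the underlying mechanism.

    // Minimal sketch of how a CORBA client resolves a server through the
    // CORBA Naming Service, using illustrative host/port/name values.
    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class BankLookup {
        public static void main(String[] args) throws Exception {
            // Point the ORB at the machine and port where the name service runs.
            String[] orbArgs = {
                "-ORBInitialHost", "localhost",   // name service host (illustrative)
                "-ORBInitialPort", "1050"         // name service port (illustrative)
            };
            ORB orb = ORB.init(orbArgs, null);

            // Resolve the initial "NameService" reference.
            org.omg.CORBA.Object nsRef =
                orb.resolve_initial_references("NameService");
            NamingContext naming = NamingContextHelper.narrow(nsRef);

            // A one-component server name; multi-component names are also supported.
            NameComponent[] serverName = { new NameComponent("BankServer", "") };
            org.omg.CORBA.Object serverRef = naming.resolve(serverName);

            // Narrow the reference to the IDL-generated interface, e.g. corba.Bank.
            corba.Bank bank = corba.BankHelper.narrow(serverRef);
            System.out.println("Resolved Bank server: " + bank);
        }
    }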

7. In the ‘bank’ data source definition pane, enter the name service. The default is ‘NameService’. Then enter the host and port number for the name service.

8. Enter the name of the CORBA server. The CORBA name service supports multi-component names. The component sequence forms a unique name in the name service. In the example, we use one component in the server name, “BankServer”.

9. Enter the full name of the CORBA server’s interface in the ‘Interface’ field. The name must be the full name of the server object interface generated by the IDL compiler. For instance, the example Bank object is declared in a module named ‘corba’, so the Java interface name is ‘corba.Bank’. The BankServer class implements the server object, but the name entered in the ‘Interface’ field should be ‘corba.Bank’, not ‘corba.BankServer’ (see the sketch following this walkthrough).

10. Click on ‘Finish’. Select the ‘Import Server Class’ button to import the API definition.

At this point, the data source is almost complete. All methods in the CORBA server are listed as requests in this data source. The remaining task is to specify the parameter values for the method invocation. The parameters of a request are displayed in the ‘parameters’ pane of the bottom folder.
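
To make the interface/implementation distinction from step 9 concrete, the sketch below shows how a hand-written BankServer class might relate to the generated corba.Bank interface. The skeleton base class name (‘_BankImplBase’ here) and the operations are illustrative and depend on the IDL compiler; the point is that the ‘Interface’ field names the generated interface, ‘corba.Bank’, not this implementation class.

    // Illustrative only: BankServer is the hand-written server implementation,
    // extending a compiler-generated skeleton (the skeleton name varies by
    // compiler, e.g. _BankImplBase or BankPOA). The Data Modeler's 'Interface'
    // field should reference corba.Bank, not corba.BankServer.
    package corba;

    public class BankServer extends _BankImplBase {
        // Implementations of the (made-up) operations declared in the IDL.
        public float getBalance(String accountId) {
            return 0.0f;   // a real server would look the account up
        }

        public String[] listAccounts() {
            return new String[] { "checking", "savings" };
        }
    }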

Articles About Connections on InetSoft

  1. Complete List Of Data Connectors

    A comprehensive compilation of the platform’s available connectors, this page catalogs cloud and on-premise data sources that can feed dashboards and reports. The article explains typical connector categories and highlights newer additions that expand cloud integration options. Practical notes clarify which data sources are best accessed via JDBC/ODBC and which use native adapters or web APIs. This resource helps teams quickly assess whether required systems are directly supported or require middleware. It is useful when scoping integrations or building a pilot with representative data sources.

  2. Database Connection Pooling Techniques

    This writeup describes connection pooling concepts used to improve performance and scalability when many users query the same database. It explains how a pool reduces overhead by reusing established connections and highlights configuration parameters that influence throughput. The article outlines threading and resource considerations that are important for high-concurrency deployments. Examples show how pooling reduces latency and stabilizes load on backend databases. Administrators will find practical guidance for tuning pools to match typical workload patterns.

  3. How To Specify Report Data Sources

    This page walks through configuring persistent storage and setting up a database as the report repository for the server. It lists supported database engines and details the fields required to define a data source, plus how to test a connection from the admin console. The instructions include tips on transaction isolation, schema mapping, and where configuration values are stored. Troubleshooting pointers describe common connectivity errors and corrective steps. The article serves as a step-by-step for initial environment setup and ongoing maintenance.

  4. Data Integration Software For Multiple Sources

    This overview highlights the platform’s data mashup and integration capabilities designed to unify disparate sources for analysis. Readers learn about supported ingestion methods including direct database connectors, web services, flat files, and API feeds. The article emphasizes transformation and cleansing features that simplify preparing heterogeneous data for dashboards. Use cases demonstrate how combining sources avoids expensive, time-consuming warehouse projects for many analytic needs. The piece is geared toward architects evaluating options for agile data access.

  5. Accessing Data Sources As Web Services

    This how-to shows how web services and XML/JSON feeds can be registered as data sources and consumed by the reporting engine. It explains mapping response structures to tabular data and best practices for polling or caching remote APIs. Security and authentication approaches are covered to ensure safe access to external endpoints. The article provides examples of common API integration scenarios and notes when to favor a live feed versus periodic extraction. It is practical guidance for teams that rely on third-party SaaS APIs as primary data inputs.

  6. Comprehensive List Of Supported Data Sources

    A catalog-style page enumerating commonly used data sources from A to Z, this resource clarifies which systems have native adapters and which use generic JDBC/ODBC access. The article helps teams quickly identify direct support for specialized systems and cloud platforms. It also explains limitations or special configuration steps required for certain sources. For integration planning, the list reduces uncertainty about connector availability and maintenance effort. The page functions as a quick reference during vendor evaluations or migration planning.

  7. Real-Time Access To Multiple Data Sources

    This piece focuses on real-time connector capabilities that allow dashboards to present current data without a separate ETL pipeline. It surveys REST APIs, JDBC sources, NoSQL systems, and cloud warehouses as live inputs and explains where each approach fits operationally. Performance strategies such as selective caching and query optimization are discussed to preserve responsiveness. The article includes concrete examples where live feeds are mission-critical, such as IoT monitoring and operational dashboards. It guides decisions on when real-time connections are worth the added complexity.

  8. Data Modeling And Connection Validation

    This technical note shows how to model datasets that join tables from multiple connections and how to validate those links inside the designer. It explains the “Test Data Source” workflow and common pitfalls when drivers or credentials mismatch. Guidance covers join strategies, primary key selection, and reducing data duplication across blended sources. The article emphasizes verifying connections early to avoid surprises during report creation. It is aimed at developers and data modelers preparing production data models.

  9. Creating Reports From Database Sources

    This tutorial lays out the steps to connect to databases, build worksheets that join tables, and create reports from those joined datasets. Practical recommendations cover access credentials, filter design, and calculated fields used to transform raw values into analytical metrics. The page includes examples of combining CRM and transactional data to produce customer-centric reports. Tips for limiting dataset size and improving query performance are provided to keep reports responsive. The content is suited for analysts tasked with building operational reports directly off live databases.

  10. Tools For Live Database Dashboards

    This article compares tool features that enable live dashboards driven by direct database queries and streaming sources. It explains why certain connector patterns are preferred for near-real-time visualization and which backend features support fast refresh cycles. The write-up addresses trade-offs between query freshness and system load, recommending hybrid strategies where appropriate. Example scenarios include operational monitoring and financial tickers where up-to-date information is essential. The piece helps teams weigh the engineering costs of live connectivity against business value.

  11. Database Access Software And Driver Support

    This page summarizes the platform’s driver and protocol support for accessing relational databases, OLAP cubes, flat files, and other repository types. It clarifies expected behaviors when using JDBC drivers, ODBC bridges, and proprietary adapters. Setup instructions and notes about compatibility issues with specific database versions are provided to reduce integration friction. The article serves as a checklist for IT teams preparing network, driver, and security prerequisites. It assists in diagnosing connection failures caused by environmental mismatches.

  12. Good Ways To Connect Data Marts

    This piece proposes pragmatic approaches for combining multiple data marts into analysis without building an enterprise data warehouse. It explains mash-up techniques that let end users join datasets from different marts using the platform’s worksheet and modeling features. The article contrasts the agility of mash-ups with the governance advantages of centralized warehousing and recommends hybrid strategies. Examples illustrate how quick joins can accelerate analytics pilots and departmental reporting. The guidance is practical for teams that need fast insights without long ETL projects.
