Real-time Data Replication — Discovering the Oracle GoldenGate REST API

Jan Leemans
Apr 30, 2024


Manage the lifecycle of your Oracle GoldenGate environments through simple REST API calls

In my previous article on Oracle GoldenGate I introduced the new LiveLab I developed, which lets you experience the power of Oracle GoldenGate to replicate data between an Oracle Database and an Apache Kafka stream.

In that lab all the configuration operations are performed manually through the various management consoles of Oracle GoldenGate 21c. But once you start implementing Oracle GoldenGate in a real environment you want to automate these tasks as much as possible, and probably implement some sort of Software Development Life Cycle to propagate versions from one environment to another.

Thanks to the Microservices architecture of Oracle GoldenGate, you also have the option to use the REST APIs of the various services that constitute the Oracle GoldenGate environment. This gives you more flexibility in managing all the components of the environment from a single location.

The architecture

In order to understand the different activities I’ll be describing, you need to have the basic architecture of the Oracle GoldenGate 21c Microservices in mind:

Oracle GoldenGate Microservices running on 2 physical machines

At every location where you run Oracle GoldenGate, you will have one Service Manager, plus four services for every deployment:

  • an Administration Service,
  • a Distribution Service,
  • a Receiver Service,
  • and a Performance Metrics Service.

In the case of the LiveLab setup, I’m using 2 deployments on a single machine, one Oracle Database deployment and one BigData deployment.
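For reference, here is the service and port layout I'll be assuming for the rest of the article. The numbers are the ones used in the commands below and come from the LiveLab provisioning; verify them against your own Service Manager console if your environment was set up differently:

# Service and port layout assumed in this article (LiveLab defaults)
#
#   Oracle Database deployment : Administration Service -> $GG_IP:10100
#                                Distribution Service   -> $GG_IP:10101
#   BigData deployment         : Administration Service -> $GG_IP:10200
#                                Receiver Service       -> $GG_IP:10202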

The basic flow

In this article I’ll walk you through setting up exactly the same replication flow as described in the LiveLab, but this time using only REST API calls.

Here’s a list of the activities I’ll be running you through:

- Checking your connectivity: in order to talk to your Oracle GoldenGate environment through REST, you need to authenticate with your username and password.

- Configure the Oracle Database connectivity and enable TRANDATA on the source tables

- Set up the EXTRACT process from the Oracle Database to capture all changes on the tables of a schema

- Configure the Distribution and Receiver services to simulate separate physical deployments of the Database and the Kafka stream. These two services handle sending the change records from one machine to another.

- Configure the Replicat process that will be responsible for inserting the changes into the Apache Kafka stream.

- Launch a load script on the source database to start inserting data, and observe the replication take place towards the Apache Kafka stream.

Some important sources of information you might want to keep close while working through this process:

- LiveLab setup process: we advise you to use the “Green button” of the LiveLab, which will provision a fully installed environment for you in a few minutes.

- GoldenGate API Reference Guide: the documentation of all the available REST API endpoints

Diving into the details

If you don’t have a GoldenGate environment available, you can use the LiveLab to easily spin up an environment, either temporarily using the green button, or more permanently on an OCI tenancy you have access to with the brown button. Once the environment is up and running, note down the public IP address of the VM.

I’ll be running these commands with cURL on a Mac, so you might need to check your escape characters when using another type of environment.

Make sure the Oracle Database listener is running. To do this, connect to the instance, switch to the user “oracle”, and start the listener:

# ssh to connect, sudo su to Oracle to start listener
export GG_IP=<Your VM IP address>
ssh opc@$GG_IP
sudo su - oracle
lsnrctl start
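If you'd rather check first instead of blindly starting it, the listener control utility also has a status command; run it as the same “oracle” user:

# Verify whether the listener is already up before (re)starting it
lsnrctl status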

Security

To access the GoldenGate services through REST, you simply need to pass your username and password using the -u parameter of cURL.

Below are some commands to check connectivity:

# check simple connectivity, no authentication yet
curl -X GET $GG_IP:10100/services

# check connectivity with authentication:
curl -u oggadmin:oggadmin -s -X GET $GG_IP:10100/services/v2/messages

The first command should return some JSON indicating that $GG_IP:10100/services/v2 is the currently available version of the API:

JSON output of the /services interface

PS: I’ve declared an alias “beau” to beautify the JSON output, so I can simply pipe my cURL commands into it for better readability:

alias beau="python -m json.tool |pygmentize -l json"
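With the alias in place, any of the calls in this article can be piped through it, for example (if you have jq installed, piping into "jq ." works just as well):

# Same /services call as above, but with pretty-printed output
curl -u oggadmin:oggadmin -s -X GET $GG_IP:10100/services | beau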

The second command should display a long list of system messages. If you get a connection error, check your security setup or the syntax of your command.
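A small aside on credentials: if you don't want the password to show up in your shell history or in ps output, cURL can read it from a configuration file instead of the -u flag. A minimal sketch (the file name is my own choice, not part of the lab):

# Store the credentials once, readable only by you
cat > ~/.ogg-curl.cfg <<'EOF'
user = "oggadmin:oggadmin"
EOF
chmod 600 ~/.ogg-curl.cfg

# Then replace "-u oggadmin:oggadmin" with "-K ~/.ogg-curl.cfg" in the calls below
curl -K ~/.ogg-curl.cfg -s -X GET $GG_IP:10100/services/v2/messages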

Creating the Database Connection

We’ll first create a Database connection, called OGG3 in the example. To do this, we just create a new credential, which will automatically create an associated database connection.

Note that we’re interacting with the Administration Service of the Oracle Database deployment, which listens on port 10100.

# Create DB Credential and connection:
curl -u 'oggadmin:oggadmin' \
-X POST $GG_IP:10100/services/v2/credentials/OracleGoldenGate/OGG3 \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '
{"userid": "C##GGADMIN@ORCLCDB", "password": "Welcome#123" }'

# list all connections
curl -u oggadmin:oggadmin -X GET $GG_IP:10100/services/v2/connections

# list the OGG3 connection
curl -u oggadmin:oggadmin \
-X GET $GG_IP:10100/services/v2/connections/OracleGoldenGate.OGG3

You can either use your browser to open the console on port 10100 and check that you can indeed see the connection, or issue the corresponding command through the API to validate that the creation was successful.
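If you're scripting these calls rather than eyeballing the output, you can also have cURL report the HTTP status code and act on that; a minimal sketch:

# A 2xx status means the OGG3 connection is there
status=$(curl -u oggadmin:oggadmin -s -o /dev/null -w '%{http_code}' \
  -X GET $GG_IP:10100/services/v2/connections/OracleGoldenGate.OGG3)
echo "HTTP status: $status"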

The next step is to test the connectivity to the database and enable TRANDATA on the schema with the following commands:

# Test the connection :
curl -u 'oggadmin:oggadmin' \
-X GET $GG_IP:10100/services/v2/credentials/OracleGoldenGate/OGG3/valid

# Add Trandata :
curl -u 'oggadmin:oggadmin' \
-X POST $GG_IP:10100/services/v2/connections/OracleGoldenGate.OGG3/trandata/schema \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '{"operation":"add", "schemaName":"ORCLPDB1.SOURCE_APP" }'

Creating the Extract and the Distribution config

On the side of the Database deployment, we’ll be configuring an Extract to capture database changes, and running a few commands to start and validate the configuration:

# Create the extract
curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10100/services/v2/extracts/E_ORACLE \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '{
"config": [
"EXTRACT E_ORACLE",
"USERIDALIAS OGG3 DOMAIN OracleGoldenGate",
"EXTTRAIL ex",
"TABLE ORCLPDB1.SOURCE_APP.*;"],
"source":"tranlogs",
"credentials":{"alias":"OGG3"},
"registration":{"containers": ["ORCLPDB1"]},
"begin":"now",
"targets":[{"name":"ex"}]}'

# List the extract
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10100/services/v2/extracts

# Start the extract process
curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10100/services/v2/commands/execute \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '{"name": "start", "processName":"E_ORACLE"}'

# check status of the extract
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10100/services/v2/extracts/E_ORACLE

Please note it is also possible to put the configuration in a JSON document and reference that file in the command. If you create a config file called “extract_config.json”, you can run the command as below:

curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10100/services/v2/extracts/E_ORACLE \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d @extract_config.json
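For completeness, here is one way to create that file, reusing exactly the same payload as the inline command earlier:

# Write the Extract configuration to extract_config.json
cat > extract_config.json <<'EOF'
{
  "config": [
    "EXTRACT E_ORACLE",
    "USERIDALIAS OGG3 DOMAIN OracleGoldenGate",
    "EXTTRAIL ex",
    "TABLE ORCLPDB1.SOURCE_APP.*;"
  ],
  "source": "tranlogs",
  "credentials": { "alias": "OGG3" },
  "registration": { "containers": ["ORCLPDB1"] },
  "begin": "now",
  "targets": [ { "name": "ex" } ]
}
EOF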

Next we’ll configure a Distribution path to send the trail files to the machine where the Apache Kafka deployment runs. In this case it is actually the same machine, so we could skip this step, but it better illustrates the capabilities of the solution:

# Attention, now addressing port 10101 (the Distribution Service)
# Create the Distribution path (note the quote splicing so that $GG_IP expands inside the JSON payload)
curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10101/services/v2/sources/ORA2KAFKA \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '{
"name": "ORA2KAFKA",
"source": {"uri": "trail://$GG_IP:10101/services/v2/sources?trail=ex"},
"target": {
"isDynamicOggPort": false,
"uri": "ogg://localhost:10202/services/v2/targets?trail=rt"}}'

# List the available distributions
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10101/services/v2/sources

# Start the Distribution service
curl -u 'oggadmin:oggadmin' -X PATCH $GG_IP:10101/services/v2/sources/ORA2KAFKA \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '{"name":"ORA2KAFKA","$schema":"ogg:distPath", "status":"running"}'

# show status of distribution service
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10101/services/v2/sources/ORA2KAFKA
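If you're scripting this step, you may want to wait until the path actually reports itself as running before moving on. A rough sketch: it simply greps the raw JSON for the status value we set in the PATCH above, so adapt the match to whatever your output looks like:

# Poll the ORA2KAFKA path until its status shows up as running
until curl -s -u 'oggadmin:oggadmin' \
    -X GET $GG_IP:10101/services/v2/sources/ORA2KAFKA | grep -qi '"running"'; do
  echo "waiting for ORA2KAFKA to start..."
  sleep 2
done
echo "ORA2KAFKA is running"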

We can check that the corresponding Receiver path was also created on the BigData side:

# Check Receiver service of BigData - switch to 10202
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10202/services/v2/targets
# Display details of the Receiver path (use the path name returned by the previous command, path11 in my case)
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10202/services/v2/targets/path11

As we are now receiving the trail files on the side of the BigData deployment, we can configure the Replicat process to insert the records into the Apache Kafka stream.

To do this we first have to create a properties file with the same name as the Replicat we’ll be creating:

# List the files already on the server:
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10200/services/v2/config/files

# Create the config file
curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10200/services/v2/config/files/R_KAFKA2.properties \
-d '{
"lines": [
"# Properties file for Replicat R_KAFKA",
"#Kafka Handler Template",
"gg.handlerlist=kafkahandler",
"gg.handler.kafkahandler.type=kafka",
"",
"gg.handler.kafkahandler.kafkaProducerConfigFile=/home/oracle/scripts/kafka_producer.properties",
"",
"gg.handler.kafkahandler.topicMappingTemplate=${tableName}",
"gg.handler.kafkahandler.keyMappingTemplate=${primaryKeys}",
"gg.handler.kafkahandler.mode=op",
"gg.handler.kafkahandler.format=json",
"gg.handler.kafkahandler.format.metaColumnsTemplate=${objectname[table]},${optype[op_type]},${timestamp[op_ts]},${currenttimestamp[current_ts]},${position[pos]}",
"",
"gg.classpath=/opt/kafka/libs/*",
"jvm.bootoptions=-Xmx512m -Xms32m"]}'

# Create the replicat
curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10200/services/v2/replicats/R_KAFKA2 \
-d '{
"config":[
"REPLICAT R_KAFKA2",
"MAP ORCLPDB1.SOURCE_APP.*, TARGET TARGET_APP.*;"],
"source": {"name": "rt"},
"mode":{ "type":"nonintegrated", "parallel": false},
"registration":"none"}'

# List all Replicats
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10200/services/v2/replicats

# Start the Replicat
curl -u 'oggadmin:oggadmin' -X POST $GG_IP:10200/services/v2/commands/execute \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '
{"name": "start","processName":"R_KAFKA2"}'

# Show details of the R_KAFKA2 Replicat
curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10200/services/v2/replicats/R_KAFKA2
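One side note on the properties file above: it points at /home/oracle/scripts/kafka_producer.properties, which the LiveLab environment already ships. If you ever need to recreate that file yourself, a minimal Kafka producer configuration looks roughly like the sketch below; the broker address is an assumption based on a single-node Kafka running locally, so adapt it to your own setup:

# Recreate the producer properties file (minimal sketch, assuming a local broker on the default port)
cat > /home/oracle/scripts/kafka_producer.properties <<'EOF'
bootstrap.servers=localhost:9092
acks=1
EOF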

Validating the replication

That’s it: the configuration is complete. It would be pretty simple to automate the relevant steps of this process to quickly create a specific configuration on a distributed environment.
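As a small illustration of what that automation could look like, here is a sketch of a helper function wrapping the POST calls used throughout this article; the function name and structure are my own, not part of GoldenGate:

# Minimal helper: POST a JSON payload to a GoldenGate service and pretty-print the reply
ogg_post() {
  local url="$1" payload="$2"
  curl -s -u 'oggadmin:oggadmin' -X POST "$url" \
    -H 'Content-Type: application/json' -H 'Accept: application/json' \
    -d "$payload" | python -m json.tool
}

# Example: start the Replicat, exactly as in the manual step above
ogg_post "$GG_IP:10200/services/v2/commands/execute" \
  '{"name": "start", "processName": "R_KAFKA2"}'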

Let’s now run some inserts into the Oracle Database and observe what happens in the various processes we configured.

To run the loader, open a terminal session on the remote machine and, as the user “oracle”, launch the script ./scripts/load.sh.

Once this is running, you can use the below commands to see what is happening in the Oracle GoldenGate environment:

# Check status of replicat
curl -u 'oggadmin:oggadmin' \
-X POST $GG_IP:10200/services/v2/replicats/R_KAFKA2/command \
-H 'Content-Type: application/json' -H 'Accept: application/json' \
-d '{"command": "GETLAG"}'

# Look at the stats of the Distribution and Receiver services:
curl -u 'oggadmin:oggadmin' \
-X GET $GG_IP:10101/services/v2/sources/ORA2KAFKA/stats

curl -u 'oggadmin:oggadmin' -X GET $GG_IP:10202/services/v2/targets/path11

# Report on Extract
curl -u 'oggadmin:oggadmin' \
-X GET $GG_IP:10100/services/v2/extracts/E_ORACLE/info/reports/E_ORACLE.rpt
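On the Kafka side, you can confirm that messages are actually arriving by attaching a console consumer to one of the topics. Since the topicMappingTemplate above maps each source table to a topic of the same name, the topic to watch is simply the table name; the table name and Kafka install location below are assumptions based on the LiveLab layout (the gg.classpath points at /opt/kafka):

# Consume one of the replicated topics from the beginning
# (replace CUSTOMERS with one of your actual SOURCE_APP table names)
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic CUSTOMERS \
  --from-beginning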

Conclusion

I hope you now have a better understanding of how to configure and monitor Oracle GoldenGate using the REST API. This mechanism allows you to easily automate these activities and incorporate them into your existing CI/CD pipelines for your Data Management environments.

Sources

Big thanks to the authors of the below articles, which helped me to get up to speed on this topic faster:

Written by Jan Leemans

Technology enthusiast, amateur cyclist, and Business Development Director for Oracle EMEA, focusing on Application Development and Data Management
