Cloud and Hybrid TUTORIAL
Deploying Progress DataDirect Hybrid Data Pipeline on Amazon AWS
Updated: 26 Feb 2021
Introduction
Progress DataDirect Hybrid Data Pipeline is an innovative data access service for enterprises and cloud ISVs that is designed to simplify and streamline data integration between cloud, mobile, and on-premises sources through a secure, firewall-friendly integration. It is a transformative technology, abstracting away the complexity that has inevitably followed the recent deluge of data.
Hybrid Data Pipeline allows developers to build and manage data-centric applications faster and easier than ever before. SaaS ISVs can drive new wins through integration with customers’ legacy applications and data. IT can immediately provide a plug-and-play solution that extends the reach of BI and ETL, RESTify any database to improve developer productivity, and accelerate the delivery of exciting new services.
You can deploy Hybrid Data Pipeline on your servers anywhere in the world, and with the explosion in the use of cloud computing platforms such as AWS and Azure, we put together this tutorial to help you deploy Hybrid Data Pipeline on Amazon AWS.
Setting up VM in AWS
- To get started, you will need an Amazon AWS account. If you don’t have one, register here, then log in to the portal.
- Once you have logged into the portal, create a new Red Hat 6.8 EC2 instance.
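If you prefer the command line over the AWS console, the instance can also be launched with the AWS CLI. The sketch below is only illustrative: the AMI ID, key pair name, and security group ID are placeholders you must replace with values from your own account, and the instance type is just a reasonable starting point.

# Launch a Red Hat EC2 instance (replace the placeholder AMI, key pair, and security group with your own values)
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx \
    --count 1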
Installing Hybrid Data Pipeline
- To get an evaluation copy of Hybrid Data Pipeline, visit this page and fill in your details to download the installer.
- To connect to your EC2 instance and copy the installer from your local machine to EC2, follow these instructions from the official AWS documentation.
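For reference, copying the installer typically looks something like the following scp command run from your local machine; the key file path, user name, and public DNS name are placeholders for your own values (on Red Hat AMIs the default user is usually ec2-user).

# Copy the installer to the EC2 instance's home directory using your key pair (run from your local machine)
scp -i /path/to/my-key-pair.pem PROGRESS_DATADIRECT_HDP_SERVER_LINUX_64_INSTALL.bin ec2-user@xxxxx.compute-1.amazonaws.com:~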
- Once you have logged in, you should find the installer you copied from your local machine in the user’s home folder.
- If the installer package is not executable, run the following command to make it executable.
chmod +x PROGRESS_DATADIRECT_HDP_SERVER_LINUX_64_INSTALL.bin
- To start the installation, run the following command, which starts the installer in console mode:
./PROGRESS_DATADIRECT_HDP_SERVER_LINUX_64_INSTALL.bin
- During the installation,
- Make sure you read and understand the license agreement, then accept it to continue the installation.
Fig: Hybrid Data Pipeline - License Agreement
- By default, the installation directory is the following; change it if you prefer a different location.
/home/users/<username>/Progress/DataDirect/Hybrid_Data_Pipeline/Hybrid_Server
Fig: Hybrid Data Pipeline – Install Folder setting
- Choose the type of installation when prompted. If you are evaluating Hybrid Data Pipeline, choose Evaluation. If you have purchased a license, choose Licensed installation and enter your license key to proceed.
Fig: Hybrid Data Pipeline – Installation License Type
- When the installer asks you to enter the hostname for your server, enter the public DNS name of the EC2 instance. It will be in the format xxxxx.compute-1.amazonaws.com (a quick way to look this up from inside the VM is shown after the figure below).
Fig: Hybrid Data Pipeline – Hostname configuration
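If you are unsure of the instance’s public DNS name, one quick way to look it up from inside the VM is the EC2 instance metadata service; this assumes the instance has a public DNS name assigned.

# Query the EC2 instance metadata service for the public DNS name
curl http://169.254.169.254/latest/meta-data/public-hostname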
- The installer tries to validate the hostname, but the validation will fail, typically because the instance’s internal hostname differs from its public DNS name. Ignore the validation failure and proceed with the installation.
Fig: Hybrid Data Pipeline – Hostname validation
- When the installer prompts for an SSL certificate file, select No to use the self-signed trust store included with the installation. If you have your own SSL certificate file that you want to use instead, select Yes and provide the path to it.
Fig: Hybrid Data Pipeline – SSL Certificate configuration
- To use the default settings, choose Typical installation (1); if you want to configure the installation with your own settings, choose Custom installation (2).
Fig: Hybrid Data Pipeline – Installation Type
- Next, you should see the Ready to Install summary with all the settings you have chosen. Press ENTER to install Hybrid Data Pipeline with those settings.
- After the installation is complete, you will see an Install Complete message. To exit the installer, press ENTER.
Fig: Hybrid Data Pipeline – Installation Complete
- Before starting the Hybrid Data Pipeline server, configure the VM firewall and the EC2 inbound security rules to accept connections on ports 8080 and 8443, which the server uses to provide its services. Note that 8080 is for HTTP and 8443 is for HTTPS.
- To configure the firewall on Red Hat 6.8 to accept connections on ports 8080 and 8443, run the following commands:
sudo iptables -I INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 8443 -j ACCEPT
sudo service iptables save
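To confirm that the new rules are in place, you can list the INPUT chain and filter for the two ports; this is only a quick check and the exact output format depends on your iptables version.

# Verify that the rules for ports 8080 and 8443 appear in the INPUT chain
sudo iptables -L INPUT -n | grep -E '8080|8443'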
- Now go to the AWS EC2 dashboard and, in the left navigation pane, expand Network & Security and open Security Groups.
- Locate the security group for this instance and click its row. You should see a bottom pane like the one below, where you can configure a new rule.
Fig: Inbound Security rules configuration on EC2
- Configure inbound security rules for the EC2 instance to allow TCP connections on ports 8080 and 8443, as shown in the screenshot below (an equivalent AWS CLI command follows the figure).
Fig: Inbound Security rules configuration on EC2
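The same rules can be added from the AWS CLI if you prefer. The security group ID below is a placeholder, and opening the ports to 0.0.0.0/0 is only appropriate for an evaluation; restrict the CIDR range for anything beyond a trial.

# Allow inbound TCP on 8080 and 8443 (replace sg-xxxxxxxx with your security group ID)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8443 --cidr 0.0.0.0/0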
- Now test whether the ports are open from your local machine by opening a terminal and running the following command. If you are not able to connect, make sure that you have properly configured the firewall and the EC2 security rules in the previous steps.
telnet xxxxx.compute-1.amazonaws.com 8443
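If telnet is not installed on your machine, nc (netcat) is a common alternative for the same check. Note that before the server has been started nothing is listening on these ports, so a quick "Connection refused" still indicates the port is reachable, whereas a long timeout usually points to a firewall or security group problem.

# Alternative port check with netcat: -z scans without sending data, -v prints the result
nc -zv xxxxx.compute-1.amazonaws.com 8443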
- Restart the EC2 VM using the following command:
sudo shutdown -r now
Starting Hybrid Data Pipeline Server
- Now that we have installed and configured everything for Hybrid Data Pipeline to run properly, it’s time to start the server.
- Log in to the EC2 VM through SSH and run the following commands to start the Hybrid Data Pipeline server:
cd /path/to/Progress/DataDirect/Hybrid_Data_Pipeline/Hybrid_Server/ddcloud/
./start.sh
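Once the start script finishes, you can verify from the VM itself that the server is answering; the -k flag skips certificate verification because of the self-signed certificate, and any HTTP response (even a redirect or error page) confirms the service is listening.

# Check that the server responds locally on the HTTPS port (-k allows the self-signed certificate)
curl -k https://localhost:8443/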
- Open your browser and go to https://xxxxx.compute-1.amazonaws.com:8443, which opens the Hybrid Data Pipeline login screen. Use d2cadmin/d2cadmin as the username and password to log in to the Hybrid Data Pipeline dashboard. The following are a couple of screenshots of Hybrid Data Pipeline for your reference.
Fig: Login Screen for Progress DataDirect Hybrid Data Pipeline
Fig: Data Stores Supported in Hybrid Data Pipeline
Congratulations
Now that you have successfully deployed the world’s most advanced hybrid data access solution, feel free to configure your data sources in Hybrid Data Pipeline. Integrate the data into your applications using standards-based ODBC or JDBC connectivity, or through its REST API, one of the most advanced OData standard APIs.
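As a taste of the OData access, a query against a configured data source looks roughly like the following. This is only a sketch: the endpoint path and the data source name SFORCE_DS are assumptions for illustration (check the OData settings of your data source in the dashboard for the exact URI), and the credentials are the default ones used above.

# Hypothetical OData query against a data source named SFORCE_DS (replace the path, data source, and entity with your own)
curl -k -u d2cadmin:d2cadmin "https://xxxxx.compute-1.amazonaws.com:8443/api/odata4/SFORCE_DS/Accounts"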
Note that the trial is valid for 90 days, during which you have complete access to any of the data stores. To learn more about Progress DataDirect Hybrid Data Pipeline, you can visit this page or watch this short video overview.
If you are ready to configure the On-Premise Connector for Hybrid Data Pipeline on AWS, head over to this tutorial.