Introduction

You have been presented with a few pain points to solve around your company’s Redshift solution. The original Redshift cluster that was launched for the company’s analytics stack has become underpowered over time. Several groups want to create incremental backups of certain tables to S3 in a format that can be plugged into data lake solutions, while other groups want select pieces of the main Redshift schema split off to new department-specific clusters.

You’ve come up with a plan to utilize the UNLOAD and COPY commands to facilitate all of the above and need to test a proof of concept to ensure that all pain points above can be addressed in this manner.

We’ll utilize the Redshift UNLOAD and COPY commands to migrate data between an existing Redshift cluster and a new cluster, which we will launch in the course of completing the lab.

Solution

Log in to the AWS Management Console using the credentials provided for the lab. Make sure you’re in the us-east-1 region.

Investigate the Lab Environment

  1. Navigate to S3 using the Services menu or the unified search bar.
  2. Select the users-data-<ACCOUNT_NUMBER> bucket and note the bucket name. There are currently no objects in the bucket.
  3. Navigate to Amazon Redshift using the Services menu or the unified search bar.
  4. Select the users-cluster link to review the cluster’s configuration. You will recreate this cluster with the ID users-cluster-2.
  5. Select Editor from the sidebar menu to open the query editor.
  6. Click Connect to Database and fill in the connection details:
    • Connection: Create a new connection
    • Authentication: Temporary credentials
    • Cluster: users-cluster (Available)
    • Database name: users
    • Database user: users_admin
  7. After the database connection details are filled in, click Connect.
  8. In the Resources pane on the left, use the Select schema dropdown to select public. You should see 2 entries: the users_data table and its users_data_pkey primary key.
  9. In the query editor, run the following query to review some sample data:
    select * from users_data limit 10;

Launch the Target Redshift Cluster

  1. Select Clusters from the sidebar menu, then click Create cluster.
  2. Configure the Cluster configuration settings:
    • Cluster identifier: users-cluster-2
    • Select the Free trial option to automatically set your cluster configuration.
  3. Configure the Database configurations settings:
    • Admin user name: users_admin
    • Admin user password: Enter a password (e.g., Strongpass1).
  4. Leave all other default settings and click Create cluster. The cluster takes some time to create.
  5. After the cluster is created, open it and select the Properties tab.
  6. Scroll down to Associated IAM roles and click Associate IAM role.
  7. Select the RedshiftS3 role and click Associate IAM roles.
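The console steps above can also be expressed programmatically. As a hedged sketch, here are the settings chosen above shaped as the keyword arguments boto3’s Redshift client accepts; the node type and node count are assumptions based on typical free-trial defaults, not values stated in the lab:

```python
# Sketch of the cluster settings from the steps above, shaped as the
# keyword arguments boto3's redshift client accepts. The node type and
# count are assumptions matching common free-trial defaults; use
# whatever the console selected for you.
def cluster_params(password: str) -> dict:
    return {
        "ClusterIdentifier": "users-cluster-2",
        "NodeType": "dc2.large",         # assumed free-trial node type
        "NumberOfNodes": 1,              # assumed single-node free trial
        "MasterUsername": "users_admin",
        "MasterUserPassword": password,
        "DBName": "dev",                 # default database on a new cluster
    }

params = cluster_params("Strongpass1")
# With AWS credentials configured you would then run (not executed here):
#   import boto3
#   boto3.client("redshift").create_cluster(**params)
#   boto3.client("redshift").modify_cluster_iam_roles(
#       ClusterIdentifier="users-cluster-2",
#       AddIamRoles=["<RedshiftS3 ARN>"])
print(params["ClusterIdentifier"])
```

The `create_cluster` and `modify_cluster_iam_roles` calls in the comment mirror steps 4 and 6–7 respectively.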

Copy the Existing Redshift Table to S3

  1. Select Editor from the sidebar menu to open the query editor.
  2. Copy the UNLOAD statement and paste it into the query editor:
    UNLOAD ('select * from users_data') TO '<users-data-bucket>' IAM_ROLE '<RedshiftS3 ARN>' FORMAT AS PARQUET;
  3. Replace <users-data-bucket> with your S3 bucket name and preface the bucket name with s3://, so the bucket name follows the format s3://<users-data-bucket>.
  4. Replace <RedshiftS3 ARN> with the ARN provided in the lab resources.
  5. Click Run. If your session times out, reconnect to the users-cluster database.
  6. Open S3 in a new browser tab and select the users-data-<ACCOUNT_NUMBER> bucket name. You should see 2 .parquet objects.
  7. Go back to the query editor and click Change connection.
  8. Update the connection details:
    • Connection: Create a new connection
    • Authentication: Temporary credentials
    • Cluster: users-cluster-2 (Available)
    • Database name: dev
    • Database user: users_admin
  9. Click Connect to connect to the cluster.
  10. In the Resources pane on the left, use the Select schema dropdown to select public. Note that there are some default tables in the schema; however, you must create a table with the appropriate columns before you can copy data to the new cluster.
  11. Copy the create table statement and paste it into the query editor:
    create table users_data(
      id_value varchar(64),
      name_first varchar(64),
      name_last varchar(64),
      location_country varchar(32),
      dob_age int,
      picture_large varchar(64),
      primary key(id_value)
    )
    distkey(location_country)
    compound sortkey(id_value);
  12. Click Run. You should now see 2 new entries under the public schema: the users_data table and its users_data_pkey primary key.
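The placeholder substitution in steps 2–4 is easy to get wrong, particularly the s3:// prefix. A minimal sketch of rendering the UNLOAD statement from a bucket name and role ARN (the values in the usage line below are hypothetical examples, not your lab’s real identifiers):

```python
def render_unload(bucket: str, role_arn: str, table: str = "users_data") -> str:
    """Build the UNLOAD statement from step 2, applying the s3://
    prefix to the bucket name as step 3 requires."""
    return (
        f"UNLOAD ('select * from {table}') "
        f"TO 's3://{bucket}' "
        f"IAM_ROLE '{role_arn}' "
        f"FORMAT AS PARQUET;"
    )

# Hypothetical bucket name and ARN for illustration only.
print(render_unload("users-data-123456789012",
                    "arn:aws:iam::123456789012:role/RedshiftS3"))
```

Paste the rendered statement into the query editor exactly as printed.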

Copy Data from S3 to the Newly Launched Redshift Cluster

  1. Copy the COPY statement and paste it into the query editor:
    COPY users_data FROM '<users-data-bucket>' IAM_ROLE '<RedshiftS3 ARN>' FORMAT AS PARQUET;
  2. Replace <users-data-bucket> with your S3 bucket name and preface the bucket name with s3://, so the bucket name follows the format s3://<users-data-bucket>.
  3. Replace <RedshiftS3 ARN> with the ARN provided in the lab resources.
  4. Click Run to run the query.
  5. After the COPY query is complete, run the following query to view some sample data:
    select * from users_data limit 10;

This data should be identical to the data you originally saw in users-cluster.
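Mirroring the UNLOAD side, the COPY statement with its placeholders filled in can be sketched the same way (again, the bucket name and ARN in the usage line are hypothetical examples):

```python
def render_copy(bucket: str, role_arn: str, table: str = "users_data") -> str:
    """Build the COPY statement from step 1, applying the s3:// prefix
    to the bucket name as step 2 requires."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}' "
        f"IAM_ROLE '{role_arn}' "
        f"FORMAT AS PARQUET;"
    )

# Hypothetical bucket name and ARN for illustration only.
print(render_copy("users-data-123456789012",
                  "arn:aws:iam::123456789012:role/RedshiftS3"))
```

Note that COPY reads every Parquet object under the prefix, so both .parquet files written by UNLOAD are loaded in one statement.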

Check Your Data

  1. To verify the data is identical for both clusters, review the data in your users-cluster-2 cluster and note a couple of the entries.
  2. Click Change connection and fill in the connection details:
    • Connection: Use a recent connection
    • Recent connection: users-cluster (Available)
  3. Click Connect to connect to the cluster. Your select * from users_data limit 10; query should still be available in the query editor.
  4. Click Run to run the query again. Your data should be the same as the data you noted from users-cluster-2.
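Eyeballing ten rows works for a proof of concept, but a stronger check is to compare aggregates on both clusters. As a sketch that goes beyond what the lab asks for, here are two queries you could run against each connection; if the migration preserved every row, each query returns the same value on both clusters. (The checksum approach is an assumption, and note that Redshift’s listagg result is limited to 64 KB, so it suits small tables like this one.)

```python
# Build verification queries to run on both users-cluster and
# users-cluster-2 in the query editor; this helper only constructs the
# SQL strings, it does not connect to Redshift.
def verification_queries(table: str = "users_data") -> dict:
    return {
        # Same row count on both clusters.
        "row_count": f"select count(*) from {table};",
        # Order-independent fingerprint of the key column.
        "id_checksum": (
            f"select md5(listagg(id_value, ',') "
            f"within group (order by id_value)) from {table};"
        ),
    }

for name, sql in verification_queries().items():
    print(name, "->", sql)
```

Run each query once per connection and compare the two results side by side.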