Target Google BigQuery

Prerequisites

To have Gluesync working on your Google BigQuery instance, you will need:

  • A Google Cloud Project ID;

  • Target tables created in advance, each with a primary key defined;

  • A configured Service account with:

    • Permission to write to the target BigQuery dataset;

    • A key (in JSON format) associated with this account, created via your GCP console (a CLI sketch of these steps follows this list).
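
If you prefer the command line, the following is a minimal sketch of these prerequisite steps using the gcloud and bq CLIs. All names (my-project-id, gluesync-bq-writer, my_dataset, customers) are placeholders, and roles/bigquery.dataEditor is only one example of a role broad enough to write to the target dataset; your organization may prefer narrower, dataset-scoped permissions.

# Create a Service Account for the agent (all names are placeholders)
gcloud iam service-accounts create gluesync-bq-writer \
    --project=my-project-id \
    --display-name="Gluesync BigQuery writer"

# Grant write access to BigQuery; roles/bigquery.dataEditor is one role that
# allows writing to tables, a dataset-scoped grant works as well
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:gluesync-bq-writer@my-project-id.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataEditor"

# Create the JSON key the agent will authenticate with
gcloud iam service-accounts keys create service-agent.json \
    --iam-account=gluesync-bq-writer@my-project-id.iam.gserviceaccount.com

# Create a target table with a primary key (BigQuery only supports NOT ENFORCED constraints)
bq --project_id=my-project-id query --use_legacy_sql=false \
    'CREATE TABLE my_dataset.customers (
       id INT64 NOT NULL,
       name STRING,
       PRIMARY KEY (id) NOT ENFORCED
     )'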

Setup via Web UI

  • Project Id: Your Google Cloud Project Id;

  • Service Agent (.json) file: (required, defaults to NULL) The Service Account key file in JSON format. The key must be made available to the agent: either upload it via the UI or mount it as a volume (see the sketch after this list).

  • Dataset location: (defaults to US) The location of the dataset to sync; refer to Google's BigQuery dataset locations documentation for more details.
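
If the agent runs as a container, one way to provide the key file without the UI upload is to mount it as a read-only volume, so that the path inside the container matches the certificate path you configure. A minimal sketch, assuming a local service-agent.json and a placeholder image name:

docker run -d \
    -v "$(pwd)/service-agent.json:/myPath/service-agent.json:ro" \
    <your-gluesync-bigquery-agent-image>

The container-side path (/myPath/service-agent.json here) is the value to use as the certificate path in the agent configuration, as shown in the REST example below.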

Looking for more information about Google Service Accounts? Check out the Service Accounts documentation from Google.

Custom host credentials

  • Dataset location: (defaults to null, meaning US) The location of the dataset to sync.

Specific configuration

This agent does not implement any specific configuration.

Setup via REST APIs

The following is an example of calling the CoreHub’s REST API via curl to set up the connection for this agent.

Connect the agent

curl --location --request PUT 'http://core-hub-ip-address:1717/pipelines/{pipelineId}/agents/{agentId}/config/credentials' \
--header 'Content-Type: application/json' \
--header 'Authorization: ••••••' \
--data '{
        "hostCredentials": {
        "connectionName": "myAgentNickName",
        "host": "project-id",
        "port": 443,
        "username": "",
        "password": "",
        "disableAuth": false,
        "certificatePath": "/myPath/service-agent.json"
    },
    "customHostCredentials": {
        "datasetLocation": "EU"
    }
}'
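
Note that for this agent the host field carries the Google Cloud Project ID rather than a hostname, datasetLocation corresponds to the Dataset location setting described above, and certificatePath must point to the Service Account key file as seen from inside the agent (for example, the path where it was mounted as a volume).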