AlertFind PostgreSQL Connector

Looking to automate your AlertFind data import? You're in the right place: our PostgreSQL connector will make automation a breeze!

To make a successful connection to your PostgreSQL database, we just need the following information:

  1. Host: This is the host name or the IP address of the server where the Database is located.

  2. Port: This is the port we use to connect and communicate with the Database.

  3. Database: This is the name of the database we are going to use to fetch the data.

  4. Schema: Please provide the name of the Schema that we are going to work with.

  5. Table: This is the name of the table or tables we are going to use to fetch the data.

  6. Username and password: Valid credentials to connect to the database.

  7. SSL: Please let us know if the connection to the Database has to be done with SSL.
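To illustrate how these fields fit together, here is a minimal sketch that assembles them into a standard libpq-style connection string. The parameter names mirror the list above; the DSN format and the sample values are our own illustration, not necessarily how the connector stores its configuration internally.

```python
# Sketch: assembling the connection settings listed above into a
# libpq keyword/value DSN. Values are illustrative placeholders.

def build_dsn(host, port, database, user, password, sslmode="require"):
    """Build a libpq-style connection string from the connector fields."""
    parts = {
        "host": host,
        "port": port,
        "dbname": database,
        "user": user,
        "password": password,
        "sslmode": sslmode,  # use "disable" if SSL is not required
    }
    return " ".join(f"{k}={v}" for k, v in parts.items())

dsn = build_dsn("db.example.com", 5432, "hr", "alertfind_ro", "s3cret")
# A PostgreSQL driver such as psycopg2 would accept this string directly,
# e.g. psycopg2.connect(dsn). Schema and table are then referenced in the
# queries the connector runs, not in the DSN itself.
```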

Once we have a successful connection, it will enable us to populate your AlertFind instance with information such as:

  • ID
  • First Name
  • Middle Name
  • Last Name
  • Email
  • Personal Email
  • Cell Phone
  • Home Phone
  • Alternate Phone
  • Phone Extension
  • Company
  • Job Title
  • Department
  • Country
  • State
  • City 
  • Address
  • Zip Code
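The attributes above can be pictured as the columns of the import file. The sketch below renders them as a CSV header plus one sample row using Python's csv module; the exact column headers and record shape AlertFind expects may differ, so treat this as illustrative only.

```python
# Sketch: the user attributes above as a CSV header plus one sample row.
# Column names and the sample record are illustrative assumptions.
import csv
import io

FIELDS = ["ID", "First Name", "Middle Name", "Last Name", "Email",
          "Personal Email", "Cell Phone", "Home Phone", "Alternate Phone",
          "Phone Extension", "Company", "Job Title", "Department",
          "Country", "State", "City", "Address", "Zip Code"]

def rows_to_csv(rows):
    """Serialize user records (dicts keyed by FIELDS) to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

sample = {f: "" for f in FIELDS}
sample.update({"ID": "1001", "First Name": "Ada", "Email": "ada@example.com"})
csv_text = rows_to_csv([sample])
```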

How does our PostgreSQL connector work?

The graphic below explains the various nodes involved in moving data from your PostgreSQL database to AlertFind.

Detailed Steps Explaining the Above Graphic:

  1. PostgreSQL connector fetches the user information stored in your PostgreSQL database.

  2. Based on your chosen frequency, our connector, running on AWS Lambda, fetches data from the PostgreSQL database.

  3. At this stage, the data goes through validation and processing and is finally transformed into a CSV file.

  4. The CSV is compressed into a ZIP file and sent to the AlertFind API via the AlertFind connector running on AWS Lambda.
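The steps above can be sketched end to end in a few lines: fetch user rows (stubbed here in place of a real PostgreSQL query), validate them, write a CSV, and compress it into a ZIP in memory. The function names and validation rule are our own assumptions, not the connector's actual code.

```python
# Sketch of steps 1-4: fetch -> validate -> CSV -> ZIP.
# fetch_users() is a stub standing in for a SELECT against the configured table.
import csv
import io
import zipfile

def fetch_users():
    # Stub: the real connector queries the configured schema/table.
    return [{"ID": "1001", "Email": "ada@example.com"},
            {"ID": "1002", "Email": ""}]

def validate(rows):
    # Minimal example rule: keep only rows with an ID and an email address.
    return [r for r in rows if r.get("ID") and r.get("Email")]

def to_zip(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ID", "Email"])
    writer.writeheader()
    writer.writerows(rows)
    zbuf = io.BytesIO()
    with zipfile.ZipFile(zbuf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("users.csv", buf.getvalue())
    return zbuf.getvalue()

payload = to_zip(validate(fetch_users()))
# `payload` is the ZIP body that would be posted to the AlertFind API.
extracted = zipfile.ZipFile(io.BytesIO(payload)).read("users.csv").decode()
```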

Data Storage Nodes in the Above Graphic:

  1. Node 5: Refers to the Apache Cassandra cluster, which stores data for execution purposes. The data has a TTL of 7 days after workflow execution has finished.

  2. Node 6: Refers to the Elasticsearch cluster, which indexes data for 30 days so that it can be searched within the logs interface.

  3. Data Safety: All data is encrypted in transit via HTTPS, using TLS 1.3 internally. Sensitive customer data such as access tokens, usernames, and passwords is encrypted at rest using the Amazon Key Management Service (KMS), which uses FIPS 140-2 validated hardware security modules.
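The retention windows above can be expressed concretely. The sketch below converts the 7-day Cassandra TTL to seconds and shows what a write with that TTL might look like in CQL; the table and column names are hypothetical, since we have no visibility into the actual schema or index policy.

```python
# Sketch: the retention windows above, in seconds. Only the 7-day and
# 30-day figures come from the documentation; everything else is assumed.
CASSANDRA_TTL_DAYS = 7            # Node 5: kept 7 days after execution
ELASTICSEARCH_RETENTION_DAYS = 30 # Node 6: searchable in logs for 30 days

cassandra_ttl_seconds = CASSANDRA_TTL_DAYS * 24 * 60 * 60

# A CQL write with that TTL might look like (hypothetical table/columns):
cql = (f"INSERT INTO executions (id, payload) VALUES (?, ?) "
       f"USING TTL {cassandra_ttl_seconds};")
```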

We encourage you to keep searching for the connector that suits your needs in our main AlertFind Connector List, under the System of Records Integrations section, by clicking the button below.

AlertFind Connector List