4: Import data into your development environment

You need to import Postgres data and Elasticsearch data. You will do these in separate steps.

Postgres

Prerequisites

  • You will use the oma image for this task.

  • You can access the production data, which means you need VPN access to production (or a data.yml file supplied to you).

  • You have created your oma_development database and loaded the database schema.

Dumping data

Skip this step if a data.yml has been supplied to you by a colleague.

  • Log in to production_local: $> dcplr oma bash --login

  • Type $> bundle install to bundle your local gems.

  • Type $> bundle exec rake db:data:dump. This gives no logging output, but it is dumping the production database into a data.yml file in your db folder.

  • Once complete, check that the data.yml file is there.

  • Exit production_local by typing: $> exit
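Because the dump task prints no output while it runs, it helps to confirm that db/data.yml exists and is non-empty before moving on. A minimal sketch of such a check (a hypothetical helper, not part of the repo; the demo writes a throwaway file so it runs anywhere):

```ruby
require "tempfile"

# Returns true if the dump file produced by `rake db:data:dump`
# exists and has content. Path defaults to the db folder used above.
def dump_ready?(path = File.join("db", "data.yml"))
  File.exist?(path) && File.size(path) > 0
end

# Demo against a temporary file standing in for db/data.yml:
Tempfile.create("data.yml") do |f|
  f.write("projects: []\n")
  f.flush
  puts dump_ready?(f.path)  # => true
end
```

If the check returns false, re-run the dump before exiting production_local.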

Loading data

  • If a data.yml file has been supplied to you by a colleague, place it in the ./db folder. Otherwise, continue.

  • Log in to development: $> dcdr oma bash --login

  • Type $> bundle install to bundle your local gems.

  • Type $> bundle exec rake db:data:load to load your data. This will take a while.

  • Type $> bundle exec rails c and then type Project.count to ensure you have data in your development environment.

You are done with the Postgres portion of this task.

ElasticSearch

Importing data for ElasticSearch differs from Postgres in that we open a connection to the production servers and copy the data in real time (no intermediate files). This action is performed in the oma-models repo, so you need to have that mapped into your environment by your docker-compose.override.yml file (the default). You will also choose a single project to import data for.
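As a rough illustration of the override file, the mapping looks something like the fragment below. The service name and paths here are assumptions, not copied from the repo; check your own docker-compose.override.yml for the real values.

```yaml
# docker-compose.override.yml (illustrative; service and path names are assumptions)
services:
  oma:
    volumes:
      - ../oma-models:/oma-models
```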

Loading data

  • Type $> dcdr oma bash --login

  • Type $> cd ../oma-models to change to the oma-models repo.

  • Ensure you have a config/application_production_local.yml file in your config folder, and that it's connecting to your production ES database.

  • Ensure you have a config/application_development.yml file in your config folder, and that it's connecting to your development ES database.

  • Type $> bundle install to bundle gems.

  • Type $> bundle exec rake console to load the console. Ensure you are connected to the right database for writing by typing: $> Influencer13.gateway.client.transport.connections.connections.first.host. Make sure this is your local (Docker) Elasticsearch and NOT your production database.

  • Type $> exit to exit the console.

  • Type $> rake es:dev_import:project[XXX], where XXX is the project id (e.g., 1244).
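Because this console writes directly to whichever cluster is configured, a defensive check can save you from clobbering production. The sketch below is hypothetical (not part of oma-models): the hash stands in for the host info returned by the console check above, and the "prod" pattern is an assumed naming convention.

```ruby
# Hypothetical guard: refuse to proceed if the configured Elasticsearch
# host looks like production. `host` stands in for the value returned by
# the ...connections.first.host console check above.
def assert_dev_cluster!(host)
  if host[:host].to_s =~ /prod/i
    raise "Refusing to write: #{host[:host]} looks like production!"
  end
  "ok to write to #{host[:host]}:#{host[:port]}"
end

puts assert_dev_cluster!(host: "elasticsearch", port: 9200)
# => ok to write to elasticsearch:9200
```

A host name like es-prod-01.internal would raise instead, aborting before any import runs.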

Other Notes

  • Check out oma-models/db/elasticsearch/migrations/es_dev_import.rake for other commands. You can import single tables as well.

  • The loader only imports new data, so if you want a clean slate you need to delete your indexes first.

