Docker in Development at IFTTT
  Add Date : 2017-08-31
  IFTTT is short for "if this then that": a service that lets one network event trigger a chain of others, with the stated goal of "putting the internet to work for you." Docker is also part of IFTTT's development practice; what follows is based on an account by IFTTT engineer Nicholas Silva.

IFTTT is in the middle of shifting its infrastructure to a container-based system. We run a largely micro-service architecture, and containers map onto it naturally. Before migrating the production environment, we decided to start with the local development environment, so that we could surface problems before taking any risks in production.

Furthermore, our local development environment had drifted away from production. We used Chef (a systems-integration framework that provides configuration management for the whole stack) and Vagrant (a tool for building virtual development environments) to manage local virtual machines. This worked, but we knew it would not work for much longer. Rather than waste time re-synchronizing a test environment with a production setup that was itself being retired, we decided to leapfrog the existing system entirely and build directly toward what we wanted.

Here is how we got our engineers developing against a Docker-based environment.

IFTTT's engineers all develop on Apple hardware, so everything here assumes Mac OS; not having to support multiple platforms kept things simple.

We collected all of the code in an open-source project called Dash. The code is fairly dense, and blindly running it would waste a lot of time, so let's first look at what it actually does.

Part 1: Starting the project

We automate the whole process with Homebrew and Ansible, driven by a single curl-bash command:

bash <(curl -fsSL https://raw.githubusercontent.com/IFTTT/dash/master/bin/bootstrap)

The bootstrap script installs Homebrew and Ansible, downloads the Dash code base, uses Ansible to install and configure VirtualBox, Docker, Docker Machine, Docker Compose and the DNS development setup, and then builds the Docker Machine VM.

Ansible is typically used to manage remote servers, but it can also configure the local machine: by passing the local IP as an inline inventory, you can run an Ansible playbook as a local task.

ansible-playbook /usr/local/dev-env/ansible/mac.yml -i, --ask-become-pass

Note the comma after the IP address: it makes Ansible treat the argument as an inline host list rather than the name of an inventory file.

An Ansible playbook is just a YAML file listing tasks to run and the desired state. I won't walk through the whole file here, but it is worth reading if you are interested.
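For a sense of shape, here is a minimal sketch of what a playbook like mac.yml could contain; the task name and package list are illustrative assumptions, not IFTTT's actual file:

```yaml
# Hypothetical excerpt; package names are assumptions, not Dash's real list.
- hosts: all
  connection: local
  tasks:
    - name: Install Docker tooling via Homebrew
      homebrew:
        name: "{{ item }}"
        state: present
      with_items:
        - docker
        - docker-machine
        - docker-compose
```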

Through Ansible, we install Homebrew packages, Caskroom binaries (for things Homebrew itself does not package), and a number of configuration files:

Docker Machine
Docker Compose
DNS resolution of .dev to the VM
NFS exports
Shell environment

The DNS setup is the interesting part. We create a file at /etc/resolver/dev that points all .dev lookups at the Docker Machine VM. Inside the VM, a simple dnsmasq container routes every .dev request back into the VM, and an nginx proxy container then routes each request to the appropriate container (more on this later). There is no need to touch /etc/hosts: when your local system resolves a domain like ifttt.dev, the request is routed to the right server automatically.
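As a concrete sketch, the resolver file is a single line; the address shown is an assumption (192.168.99.100 is Docker Machine's default VirtualBox IP, and yours may differ — check docker-machine ip dev):

```
# /etc/resolver/dev
nameserver 192.168.99.100   # assumed default docker-machine VirtualBox IP
```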

Part 2: Creating the Docker Machine

The dev command wraps several multi-step sequences behind simple aliases. For example, to create the dev machine we run:

docker-machine create \
  --driver virtualbox \
  dev

docker-machine scp \
  /usr/local/dev-env/docker/bootsync.sh \
  dev:/tmp/bootsync.sh

docker-machine ssh dev \
  "sudo mv /tmp/bootsync.sh /var/lib/boot2docker/bootsync.sh"

docker-machine restart dev

The intent is clear: copy the bootsync script into the VM so it sets up NFS and our dev DNS, then restart the VM.

The script itself is not very complicated:

#!/bin/sh
sudo umount /Users
sudo /usr/local/etc/init.d/nfs-client start
sleep 1
# <host-ip> and <vm-ip> are placeholders; the concrete addresses are
# not shown here.
sudo mount.nfs <host-ip>:/Users /Users -v -o \
  rw,async,noatime,rsize=32768,wsize=32768,proto=udp,udp,nfsvers=3

grep '\-\-dns' /var/lib/boot2docker/profile || {
  echo 'EXTRA_ARGS="$EXTRA_ARGS --dns <vm-ip> \
  --dns 8.8.X.X --dns 8.8.X.X"' | sudo tee -a \
  /var/lib/boot2docker/profile
}

echo -e "nameserver <vm-ip>\nnameserver 8.8.X.X" \
  | sudo tee /etc/resolv.conf

First the script unmounts the standard vboxfs share and starts the NFS client, then mounts the shared directory exported by the host machine. Docker Machine runs this script synchronously at boot whenever it is present in that directory.

I tried many variations of the mount command, and few of them worked. By the time I finally got it right, I had become something of an NFS expert.

Next, the script points the first DNS entry at our dnsmasq container, then adds 8.8.X.X as a fallback server. This way .dev domains still resolve while ordinary internet lookups keep working. Without the 8.8.X.X fallback, a change of network would leave the Docker Machine's DNS cache stale, forcing you to restart the machine every time you switched networks.
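The grep || { ... } guard in the script is a small idempotency pattern worth noting: it appends the DNS flags only if none are present yet, so re-running bootsync.sh at every boot never duplicates them. A self-contained sketch of the pattern (the file and flag value are illustrative, not the real boot2docker profile):

```shell
profile=$(mktemp)   # stand-in for /var/lib/boot2docker/profile

append_dns_once() {
  # Append the EXTRA_ARGS line only if no --dns flag exists yet.
  grep -q -- '--dns' "$profile" || \
    echo 'EXTRA_ARGS="$EXTRA_ARGS --dns 8.8.8.8"' >> "$profile"
}

append_dns_once   # first run: appends the line
append_dns_once   # second run: no-op, the guard sees the existing flag
```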

At this point you should be able to connect to the Docker daemon running in the VM. Run docker-machine env dev to see how to point your client at it, then run docker ps to confirm Docker is responding. An (initially empty) container listing means your environment is installed and configured correctly.

Part 3: Developing in containers

Developing in containers requires a shift in thinking: the ways we write code, run it and test it are often not what we are used to. You may have one or more local databases, or mocked dependencies such as S3 and DynamoDB.

Before containerized development, you probably developed either directly against your machine's OS or inside a virtual machine, installing every piece of software you needed. Over time you keep adding packages and configuration until the system becomes a Snowflake Server. Dependency management causes problems early on, which is why tools like Bundler, pip, virtualenv and RVM exist to help contain them. And although you could in principle test against a new version of MySQL this way, actually doing it is not so simple.

In the container world, you no longer need a long-lived, continuously patched development environment. You can still work that way, but it is not the recommended workflow. The traditional "VM" that a codebase ran on top of is replaced by a much lighter-weight virtualization layer called a "container". (See the Docker documentation to learn more.)

Containers are created from images. An image is essentially a read-only template from which you create your application's runtime environment; everything the application needs travels with the container, so the application moves wherever the container moves. You can always start a fresh container from the same image, and in practice you do this all the time. It also means different containers can carry different configurations without any management chaos: one container can run a codebase on Ruby 2 while another runs the same code on Ruby 1.9. Need a new gem? Create a container (based on the Ruby image you were already using) and install it there. Without containers, when something like Rails 2 drags in a long chain of dependent gems, you have to think about every gem installed on your whole system.

For more "official" application dependencies (node, mysql-client, and so on) that must exist wherever the application container runs, you use a Dockerfile to build an image that already contains them, and test your application against that image.
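A minimal Dockerfile.development might look like the following sketch; the base image and package list are assumptions for illustration, not IFTTT's actual file:

```dockerfile
# Hypothetical development Dockerfile; packages are illustrative.
FROM ruby:2.2

# System-level dependencies the app needs at runtime.
RUN apt-get update && apt-get install -y nodejs mysql-client

WORKDIR /app

# Default command; docker-compose.yml overrides this in development.
CMD ["puma", "-C", "config/puma.development.rb"]
```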

When dependencies are separate processes, we usually split them into additional containers. For example, one of our apps depends on MySQL, Redis and S3 all at once; we compose those containers together with ours via a YAML file in the project's root directory. For example:



web:
  build: .
  dockerfile: Dockerfile.development
  volumes:
    - .:/app
  links:
    - redis:redis.myapp.dev
    - s3:s3.myapp.dev
    - mysql:mysql.myapp.dev
  command: puma -C config/puma.development.rb

mysql:
  image: mysql/mysql-server:5.6
  volumes:
    - /mnt/sda1/var/lib/mysql:/var/lib/mysql

redis:
  image: redis:2.6

s3:
  image: bbcnews/fake-s3
With this setup the dependency relationships are easy to see. The web service is built from the current directory using the Dockerfile.development file. The code directory is mounted into the container as a volume, so the container runs whatever code is in that directory. Keeping a separate development Dockerfile matters: in production we bake a snapshot of all the code into the image, so if we built the development image the same way, every change would require a rebuild. We also override the command defined in the Dockerfile so that we can load the development configuration.

When we start the web container, Docker sees that it is linked to the redis, mysql and s3 containers and starts them automatically. Docker also writes the .dev domain names and the correct container addresses into the container's /etc/hosts file, which means other people need no extra configuration at all. We can pin specific versions of Redis and MySQL, and because MySQL's data directory is mounted from the VM, data persists even as containers come and go.
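Inside the web container, those link aliases show up as ordinary /etc/hosts entries, roughly like this (the addresses are invented for illustration):

```
172.17.0.3    redis.myapp.dev
172.17.0.4    s3.myapp.dev
172.17.0.5    mysql.myapp.dev
```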

Each project carries its own application-level configuration: a Dockerfile for the service itself, and a docker-compose.yml for the other services it runs alongside. Starting on a new project with Dash's dev command looks like:

git clone [GIT_URL]

dev up # (simply an alias to docker-compose up -d)


Part 4: Inter-Service Communication & the Web Browser

You might be wondering: how do I actually see the results? This isn't a local environment any more. Quite right; I haven't yet covered how requests reach your containers.

If you have been developing directly against your OS rather than inside a virtual machine, you are probably used to hitting your app at http://localhost:8000. With Docker and Docker Machine, there are now two entirely different layers of abstraction in the way.

That separation is deliberate: isolating each of our services in its own container gives us a much clearer picture of what each service really is.

Still, to make the experience as simple as the old local setup, we had some work to do.

As I said in Part 1, a small dnsmasq DNS server routes all requests for the .dev TLD to the Docker Machine VM. What remains is routing those requests on to the correct container. Unfortunately, containers do not expose their network interfaces by default, and each container's IP inside the VM is assigned randomly. You can bind a host port to a container port, but then you must carefully manage the conflicts between them yourself.

This is where Jason Wilder's nginx reverse-proxy container comes in. It watches the Docker daemon for container events and dynamically reconfigures an nginx reverse proxy accordingly, binding to port 80 of the VM. Any new container that carries a VIRTUAL_HOST environment variable gets traffic for that host routed to it. Because this all happens at runtime, we can wire a service up just by adding two lines to its docker-compose.yml:




web:
  environment:
    - VIRTUAL_HOST=myapp.dev # For nginx proxy

Then stop the container (dev stop web), remove it (dev rm web), and bring everything back up (dev up). This is another example of the mental shift containers require: to change an environment variable, I stop the service, set the new value, and start it again. Since environment variables only apply to new containers, the cleanest way is to remove the container and recreate it.

The reverse proxy also solves a development problem: how do I work on two services that depend on each other?

Defining the two services in two different docker-compose.yml files gets complicated fast: having each compose file build the other service's containers leads straight into a circular-dependency nightmare. But with our dnsmasq container, every .dev request is routed through nginx. As long as a service is registered under the .dev TLD on the VM, any service can reach any other with a plain URL. We have our own developer portal where developers register identities like ifttt.dev, so requests to ifttt.dev reach the right internal app. If the app you request is not running, nginx returns a 503.
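The proxy itself runs as just another container. Following jwilder/nginx-proxy's documented usage, a Compose service for it might look like this sketch (the service name is an assumption):

```yaml
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    # Lets the proxy watch the Docker daemon for container start/stop
    # events and regenerate its nginx config dynamically.
    - /var/run/docker.sock:/tmp/docker.sock:ro
```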

Part 5: Package management

For production code, installing packages step by step in the Dockerfile makes sense. For Ruby projects we do this with Bundler: we build an image with the bundle already installed, which guarantees the packages are present wherever the image runs. (Nothing here is specific to Bundler; the same applies to any packaging step.)

Development is different, though. If adding a single gem meant re-running a full bundle install, the workflow would become very fragile. Worse, any gem not baked into the Dockerfile would have to be reinstalled every time a new container started! Fortunately, there is a better way to solve this:




web:
  volumes_from:
    - bundler-cache

bundler-cache:
  image: ruby:2.2
  command: /bin/true
  volumes:
    - /usr/local/bundle

By creating another container based on the same Ruby image our application uses (the one its Dockerfile builds from), we can exploit something deeper in Docker. The bundler-cache container defines a volume at the path where gems are installed, runs /bin/true, and exits. Even though the container is not running, we can mount its volume into other containers via volumes_from. If you delete the web container, the bundler-cache container survives; the next time you create the web container, the volume is mounted again with every gem still in place. And if you ever want to clear the cache and start fresh, simply delete the bundler-cache container.

Using this pattern in every project, we have found package management quick and painless. The one real drawback: if you do delete bundler-cache, you must reinstall all the gems.

Summary

Containerization and Docker are excellent tools at the infrastructure layer. If you plan to move to containers, I strongly recommend trying them in your local development environment first. Since we began deploying Dash internally, setup time for new developers has dropped from days to hours. We completed the migration (including building Dash itself) within a week, and our engineers have already begun contributing to it.