Docker in Development and Practice at IFTTT
     
Add Date: 2017-08-31
         
       
         
  IFTTT is "if this then that" abbreviation, in fact, is to make your network behavior can trigger a chain reaction, let you use more convenient, its purpose is to "Put the internet to work for you" (make the Internet work for you) . Docker in IFTTT also developing in practice, here are some of Nicholas Silva introduced.

IFTTT is in the middle of an architectural shift from conventional infrastructure to a container-based system. We run a large number of microservices, and containers are a natural way to manage them. Before migrating our production architecture, we decided to start with the local development environment, so that we could discover problems with our applications before taking any risks in production.

Furthermore, our local development environment had drifted away from production. We used Chef (a systems-integration framework that provides configuration management for the whole architecture) and Vagrant (a tool for building virtual development environments) to manage local virtual machines. Although this worked, we knew it would not keep working for long, and we did not want to waste time synchronizing a test environment with a production environment that was about to be retired. So we decided to leapfrog the existing system entirely and build exactly the development environment we wanted.

Here is how we got our engineers' development environments running on Docker.

IFTTT's engineers currently develop on Apple hardware, so everything here assumes Mac OS; since we did not need cross-platform support, things stay relatively uncomplicated.

All of the code is collected in an open source project called Dash. It is fairly dense, and blindly running it would waste a lot of time, so before running anything let's look at what it actually does.

Part 1: Starting the project

We use Homebrew and Ansible, driven by a curl-bash one-liner, to automate the entire process:

bash <(curl -fsSL https://raw.githubusercontent.com/IFTTT/dash/master/bin/bootstrap)


The bootstrap script installs Homebrew and Ansible, downloads the Dash code base, uses Ansible to install and configure VirtualBox, Docker, Docker Machine, Docker Compose, and DNS for the development environment, and then builds the Docker Machine VM.

Ansible is typically used to manage remote servers, but it can also configure the local machine. By passing 127.0.0.1 as the inventory host parameter, you can run an Ansible playbook against localhost:

ansible-playbook /usr/local/dev-env/ansible/mac.yml -i 127.0.0.1, --ask-become-pass


The comma after the IP address is required: it makes Ansible treat the parameter as an inline host list instead of the path to an inventory file.
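As a quick illustration (an ad-hoc command of my own, not from the Dash repository), the same inline-host syntax works for any Ansible invocation:

# The trailing comma tells Ansible "this is an inline host list", not an inventory file path
ansible all -i 127.0.0.1, -c local -m ping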

An Ansible playbook is just a YAML file that lists tasks to run and the state they should produce. I won't walk through the file here, but if you are interested you can read the whole thing.

Through Ansible we install Homebrew's Caskroom (for binaries that Homebrew itself does not ship) along with a large number of packages and configuration files, including:

VirtualBox
Docker
Docker Machine
Docker Compose
DNS resolution of .dev to the VM
NFS exports
Shell environment


The DNS setup is the interesting part to look at here. We create a file at /etc/resolver/dev containing:

nameserver 192.168.99.100


All .dev requests are thereby routed to the Docker Machine. A simple container in the VM runs Dnsmasq and answers every .dev request, while another container running an nginx proxy routes each request on to the appropriate container (more on that later). There is no need to modify the /etc/hosts file! When your local system asks for a domain like ifttt.dev, the request is routed from the host into the VM and on to the right server.
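To sanity-check the resolver (my own verification commands, assuming the VM keeps the default 192.168.99.100 address and a hypothetical myapp.dev service), you can query the dnsmasq container directly and then confirm that Mac OS honors the /etc/resolver/dev file:

# Ask the dnsmasq container in the VM directly
dig @192.168.99.100 myapp.dev +short

# Ask through the Mac OS resolver, which reads /etc/resolver/* (plain dig does not)
dscacheutil -q host -a name myapp.dev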

Part 2: Creating the Docker Machine

The dev command bundles several long commands behind simple aliases. For example, to create the dev machine we use:

docker-machine create \
  --driver virtualbox dev

docker-machine scp \
  /usr/local/dev-env/docker/bootsync.sh \
  dev:/tmp/bootsync.sh

docker-machine ssh dev \
  "sudo mv /tmp/bootsync.sh /var/lib/boot2docker/bootsync.sh"

docker-machine restart dev

The intent is plain: to set up NFS and dev DNS, we copy the following script into the VM and then restart it.

This script is not very complicated:

#!/bin/sh
sudo umount /Users
sudo /usr/local/etc/init.d/nfs-client start
sleep 1
sudo mount.nfs 192.168.99.1:/Users /Users -v -o \
  rw,async,noatime,rsize=32768,wsize=32768,proto=udp,udp,nfsvers=3
grep '\-\-dns' /var/lib/boot2docker/profile || {
  echo 'EXTRA_ARGS="$EXTRA_ARGS --dns 192.168.99.100 --dns 8.8.8.8 --dns 8.8.4.4"' \
    | sudo tee -a /var/lib/boot2docker/profile
}
echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" \
  | sudo tee /etc/resolv.conf


First we unmount the standard vboxfs share and start the NFS client, then mount the directory shared from the host machine. Docker Machine runs this script synchronously at boot whenever it is present in that directory.

I tried many variants of that mount command before one finally succeeded; by the time it worked, I had become something of an NFS expert.

The rest of the script handles DNS. The first part points the Docker daemon at the dnsmasq container, and the second sets the VM's own DNS servers to 8.8.x.x, which keeps both .dev lookups and general network access working. Without the 8.8.x.x servers, your DNS cache would go stale whenever your Docker Machine's network changed, and you would have to restart the machine on every network switch.

At this point you should be able to connect to the Docker daemon running in the VM. Run docker-machine env dev to see how to connect, and run docker ps to confirm that Docker is installed and responding; you should see an empty listing:

CONTAINER ID IMAGE COMMAND CREATED


If you see this, your environment is installed and configured correctly.
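Concretely, pointing a shell at the VM's daemon looks something like this (a sketch; docker-machine env prints the exact variables for your setup):

# Point the Docker client at the daemon inside the dev VM
eval "$(docker-machine env dev)"

# A header-only table confirms the daemon is reachable
docker ps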

Part 3: Developing in containers

Developing in containers requires a shift in thinking: much of how we write code, run it, and test it is no longer the same. You may have one or more local databases, or mocked dependencies such as S3 and DynamoDB.

Before containers, local development probably meant working directly on your machine's OS, or in a virtual machine with all the software you needed installed; over time you kept adding configuration and programs until the system became a snowflake server. Dependency management causes trouble early on, which is why tools like Bundler, pip, virtualenv, and RVM exist to help keep it under control. And although you can in principle test against a new version of MySQL, actually doing so is rarely that simple.

With containers, you no longer need a long-lived, continuously improved development environment. You can still have one if you want, but it is not the recommended workflow. The traditional "VM that can run a copy of the code" is replaced by a much lighter-weight virtualization layer called a container. (To learn more, see the Docker documentation.)

Containers are created from images. An image is essentially a read-only template from which you create your application's environment, and everything the container needs moves with the container. You can always start an identical container again from the same image, and in practice you do so constantly. This means different containers can carry different configurations without any management chaos: one codebase can run on Ruby 2 while another runs on Ruby 1.9. Need a new gem? Create a fresh container (from the same Ruby image you used before) and install it there, instead of worrying about how, say, Rails 2's large tree of gem dependencies interacts with everything else installed on your system.
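For instance (my own illustration, using official Ruby image tags that may or may not still be published), two runtimes coexist without touching each other:

# Each container carries its own Ruby; neither affects the host or the other
docker run --rm ruby:2.2 ruby -v
docker run --rm ruby:1.9.3 ruby -v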

To get more "official" application relies on (node, mysql-client, etc.), when running a container for all applications exist, you can use Dockerfile test your application libraries to create a mirror image already contains these programs .

When a dependency is its own running process, we usually split it out into an additional container. For example, one application may depend on MySQL, Redis, and S3 all at once; we compose those containers together through a YAML file in the project's root directory. For example:

web:
  build: .
  dockerfile: Dockerfile.development
  volumes:
    - .:/app
  links:
    - redis:redis.myapp.dev
    - s3:s3.myapp.dev
    - mysql:mysql.myapp.dev
  command: puma -C config/puma.development.rb

mysql:
  image: mysql/mysql-server:5.6
  volumes:
    - /mnt/sda1/var/lib/mysql:/var/lib/mysql

redis:
  image: redis:2.6

s3:
  image: bbcnews/fake-s3


With this setup the dependency relationships are easy to see. We build the web service from the project directory using the Dockerfile.development file. The code directory is mounted into the container as a volume, so the code the container executes is whatever is in that directory. Keeping a separate Dockerfile for development is the important part: in production we bake the code into the image as a snapshot at build time, so any change would require a rebuild, whereas mounting the directory avoids that in development. We also override the command defined in the Dockerfile so that we can load the development configuration.
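When Dockerfile.development itself changes, one rebuild is still needed; assuming dev forwards subcommands to docker-compose (as the dev up alias below suggests), that is:

# Rebuild the web image from Dockerfile.development, then recreate the container
dev build web   # i.e. docker-compose build web
dev up          # i.e. docker-compose up -d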

When we start the web container, Docker sees that it is linked to the redis, mysql, and s3 containers and starts them automatically. Docker also writes the dev domain names and the correct container addresses into /etc/hosts. Nobody else has to do any configuration for this to work, which makes life a lot easier. We can pin specific versions of redis and mysql, and because MySQL's data directory is mounted from the VM, the data persists even as containers come and go.
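You can see the effect of the links from inside the container (a sketch; the addresses shown are purely illustrative):

# Each linked service appears as a hosts entry inside the web container
dev run web cat /etc/hosts    # i.e. docker-compose run web cat /etc/hosts
# ...
# 172.17.0.5    redis.myapp.dev
# 172.17.0.6    mysql.myapp.dev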

Each project carries its own application-level configuration: the Dockerfile the service runs from, and a docker-compose.yml file describing the other services it needs. Starting a new project with Dash's dev command looks like this:

git clone [GIT_URL]

dev up # (simply an alias for docker-compose up -d)

 

Part 4: Inter-Service Communication and the Web Browser

You might be wondering how to actually see the results, since this is not a purely local environment. Quite right; I have not yet explained how requests reach your containers.

If you used to develop directly on your OS rather than in a virtual machine, you probably reached your app at http://localhost:8000. With Docker and Docker Machine, there are now two entirely new layers of abstraction in between.

That separation turns out to be valuable: with each service isolated in its own container, it becomes much clearer what each service really is.

To get back the simplicity of a purely local environment, we have to do a bit of extra work.

As described in Part 1, a small DNS server (our dnsmasq container) routes all requests for the .dev TLD to the Docker Machine VM. All that remains is routing those requests to the correct container. Unfortunately, container network interfaces are not exposed by default, and each container's IP inside the VM is assigned randomly. You can bind a host port to a container port, but then you have to manage port conflicts carefully yourself.
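The manual alternative looks like this (hypothetical image names) and shows why it does not scale:

# Publishing host ports works for one app...
docker run -d -p 3000:3000 myapp

# ...but a second app wanting the same port fails with "port is already allocated"
docker run -d -p 3000:3000 otherapp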

This is where Jason Wilder's nginx reverse-proxy container comes in. It watches the Docker daemon for containers starting and stopping, dynamically regenerates an nginx reverse-proxy configuration in response, and binds port 80 on the VM. Any new container that carries a VIRTUAL_HOST environment variable gets traffic for that host routed to it. Since all of this happens live, we only need to add a couple of lines to docker-compose.yml:

web:
  ...
  environment:
    - VIRTUAL_HOST=myapp.dev # For nginx proxy
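For reference, the proxy itself is an ordinary container; started by hand (our compose setup normally takes care of this), it looks roughly like:

# jwilder/nginx-proxy watches the Docker socket and regenerates its nginx config on container events
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy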


Stop the container (dev stop web), remove it (dev rm web), and bring everything back up (dev up). This is another example of the mental shift containers require: environment variables only apply to new containers, so the cleanest way to change one is to remove the container and start it again.
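As a sketch, assuming dev forwards each subcommand straight to docker-compose:

dev stop web   # docker-compose stop web
dev rm web     # removes the old container along with its old environment
dev up         # creates a fresh container that picks up the new variable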

The reverse proxy also solves a real problem in service development: how do I work on two services that depend on each other?

Defining the two services in two separate docker-compose.yml files quickly becomes complicated: each project ends up having to build the other's containers, and the circular dependency between them turns into a nightmare. With our dnsmasq container, however, every .dev request is routed to nginx. As long as a service is registered under the .dev TLD on the virtual machine, any service can reach any other by URL. For example, our developer portal registers itself as ifttt.dev, so other programs can simply make requests to ifttt.dev. If the program you are calling is not running, nginx returns a 503.
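Cross-service calls are therefore ordinary HTTP requests to .dev names (an illustration, using the portal mentioned above):

# Routed by dnsmasq to the VM, then by the nginx proxy to the portal container
curl -i http://ifttt.dev/

# If the portal container is not running, the proxy answers for it:
# HTTP/1.1 503 Service Temporarily Unavailable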

Part 5: Managing packages

For production code, installing dependency packages step by step in the Dockerfile makes sense. For Ruby projects we do this with Bundler: we build an image with the gems installed at build time, which guarantees the packages are present whenever the image runs. Bundler is not strictly necessary for this, but it keeps the whole packaging process consistent.

Development is different, though. If adding a single gem of your own meant re-running a complete bundle install every time, the workflow would become very fragile. Worse, if the gem is not baked into the Dockerfile, you would have to reinstall it every time you started a new container! Fortunately, there is a better way to solve this:

web:
  ...
  volumes_from:
    - bundler-cache
  ...

bundler-cache:
  image: ruby:2.2
  command: /bin/true
  volumes:
    - /usr/local/bundle


By creating another container from the same Ruby base image our application builds on (via its Dockerfile), we can use one of Docker's deeper features. The bundler-cache container declares a volume at the system path where gems are installed, runs /bin/true, and exits. Even though the container is not running, we can still mount its volumes into other containers with volumes_from. If the web container is deleted, the bundler-cache container survives; the next container we create mounts the volume again with every previously installed gem intact. And if you ever want to clear the cache and start over, you can simply delete bundler-cache.

With one such cache container per project, we have found this a very quick and easy way to manage packages. The biggest drawback is that if you delete bundler-cache by accident, you must reinstall all the gems.
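Clearing the cache deliberately is just deleting and recreating that one container (again assuming dev wraps docker-compose):

dev rm bundler-cache   # drop the volume container and the gems it holds
dev up                 # recreate it empty; the next bundle install starts from scratch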

Summary

Containers and Docker are an excellent tool for the infrastructure layer. If you plan to move to containers, I strongly recommend trying them out in the local development environment first. Since we started rolling out Dash internally, the time a new developer needs to get set up has dropped from days to hours. We completed the migration (including the changes to Dash itself) within a week, and our engineers have already begun contributing improvements of their own.
     
         
       
         