  RabbitMQ Getting Started Tutorial
     
  Add Date : 2017-04-13      
         
         
         
This article is a translation of the official RabbitMQ getting-started guide. Since my ability is limited, some translations are bound to be imprecise; if you find a mistake, please contact me so I can correct it promptly. Now, to the text:

RabbitMQ is a message broker. The main idea is very simple: it accepts and forwards messages. You can think of it as a post office: when you drop a letter in a mailbox, you trust that the postman will eventually deliver it to the recipient. RabbitMQ plays the role of the mailbox, the post office, and the postman.

The main difference between RabbitMQ and a post office is that RabbitMQ does not deal with paper; instead it accepts, stores, and forwards messages in binary form.

When talking about message delivery in RabbitMQ, we use some standard terms.

Producing simply means sending. A program that sends messages is a producer; we will depict it with a "P".

A queue is like a mailbox that lives inside RabbitMQ. Although messages flow through RabbitMQ and your applications, they are only ever stored inside a queue. A queue is not bound by any limits: it can hold as many messages as you like, essentially acting as an infinite buffer. Many producers can send messages to one queue, and many consumers can receive messages from one queue.

Consuming is similar to receiving. A consumer is usually a program that waits to receive messages; we will depict it with a "C".

Note that the producer, the consumer, and the broker do not have to reside on the same machine; in fact, in most applications they do not.

"Hello World"

(Using the java client)

In this part of the tutorial we are going to write two Java programs: a producer that sends a single message, and a consumer that receives messages and prints them out. We will gloss over some of the details of the Java API, concentrating on this very simple thing just to get started: a "Hello World" message.

The Java client library
RabbitMQ follows AMQP, an open, general-purpose messaging protocol. There are AMQP clients for many different languages; we will use the Java client provided by RabbitMQ.
Download the client library package, check its signature, and unzip it into your working directory, extracting the JAR files from it:

$ unzip rabbitmq-java-client-bin-*.zip
$ cp rabbitmq-java-client-bin-*/*.jar ./
(The RabbitMQ Java client is also in the Maven central repository, with groupId com.rabbitmq and artifactId amqp-client.)

Now we have the Java client and its dependencies, and we can write some code.

Sending

We will make our sender send a message, and our receiver receive it. The sender connects to RabbitMQ, sends a single message, then exits.

In Send.java we need to import a few classes:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
Set up the class and name the queue:

public class Send {

  private final static String QUEUE_NAME = "hello";

  public static void main(String[] argv)
      throws java.io.IOException {
      ...
  }
}
Next, we create a connection to the server:

    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
The connection abstracts the socket connection, and takes care of protocol version negotiation, authentication and so on for us.
Here we connect to a broker on the local machine, hence localhost. If we wanted to connect to a broker on a different machine, we would simply specify its host name or IP address here.

Next we create a channel, which is where most of the API for getting things done resides.

To send, we must declare a queue for us to send to; then we can publish a message to the queue:

channel.queueDeclare(QUEUE_NAME, false, false, false, null);

String message = "Hello World!";
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println("[x] Sent '" + message + "'");
Declaring a queue is idempotent: it will only be created if it does not exist already. The message content is a byte array, so you can encode whatever you like there.

Lastly, we close the channel and the connection:

channel.close();
connection.close();
Here's the whole Send.java class.

Sending doesn't work!

If this is your first time using RabbitMQ and you don't see the "Sent" message, you may be scratching your head wondering what could be wrong. Maybe the broker was started without enough free disk space (by default it needs at least 1Gb free) and is therefore refusing to accept messages. Check the broker log file to confirm, and reduce the limit if necessary. The configuration file documentation will show you how to set disk_free_limit.
 
Receiving

The code above built our sender. Our receiver pulls messages from RabbitMQ, so unlike the sender, which publishes a single message, we keep the receiver running to listen for messages and print them out.

The code in Recv.java has almost the same imports as Send:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;

The extra QueueingConsumer class is used to buffer the messages pushed to us by the server.

Setting up is the same as for the sender: we open a connection and a channel, and declare the queue from which we are going to consume. Note that this must match the queue that Send publishes to.

public class Recv {

  private final static String QUEUE_NAME = "hello";

  public static void main(String[] argv)
      throws java.io.IOException,
             java.lang.InterruptedException {

    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();

    channel.queueDeclare(QUEUE_NAME, false, false, false, null);
    System.out.println("[*] Waiting for messages. To exit press CTRL+C");
    ...
    }
}
Note that we declare the queue here as well. Because we might start the receiver before the sender, we want to make sure the queue exists before we try to consume messages from it.
We are about to tell the server to deliver us the messages from the queue. Since the server will push messages to us asynchronously, we provide a callback object that will buffer the messages until we are ready to use them. That is what QueueingConsumer does.

QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(QUEUE_NAME, true, consumer);

while (true) {
  QueueingConsumer.Delivery delivery = consumer.nextDelivery();
  String message = new String(delivery.getBody());
  System.out.println("[x] Received '" + message + "'");
}
QueueingConsumer.nextDelivery() blocks until another message has been delivered from the server.

This is the entire Recv.java class.

Putting it all together

You can compile both of these with just the RabbitMQ Java client on the class path:

$ javac -cp rabbitmq-client.jar Send.java Recv.java
To run them, you'll need rabbitmq-client.jar and its dependencies on the class path. In a terminal, run the sender:

$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar Send
Then, run the receiver:

$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar Recv
On Windows, use a semicolon instead of a colon to separate items in the class path.

The receiver will print the message it gets from the sender via RabbitMQ. The receiver will keep running, waiting for messages (use Ctrl-C to stop it), so try running the sender from another terminal.
If you want to check on the queue, try using rabbitmqctl list_queues.

Hello World!

Time to move on to part two and build a simple work queue.

Hint
To save typing, you can set an environment variable for the class path:

$ export CP=.:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar
$ java -cp $CP Send
Or on Windows:

> set CP=.;commons-io-1.2.jar;commons-cli-1.1.jar;rabbitmq-client.jar
> java -cp %CP% Send

Work Queues

(Using the Java Client)

In the first part of this guide we sent and received messages to and from a named queue. In this part we will create a work queue that will be used to distribute time-consuming tasks among multiple workers.
The main idea behind work queues (also known as task queues) is to avoid doing a resource-intensive task immediately and having to wait for it to complete. Instead we schedule the task to be done later. We encapsulate a task as a message and send it to a queue. A worker process running in the background will pop the tasks and eventually execute them. When you run many workers, the tasks will be shared between them.

This idea is especially useful in web applications, where it's impossible to handle a complex task during a short-lived HTTP request.

Preparation

In the previous tutorial we sent a message containing "Hello World!". Now we will be sending strings that stand for complex tasks. We don't have a real-world task, like images to be resized or pdf files to be rendered, so let's fake it by just pretending we're busy, using the Thread.sleep() function. We will take the number of dots in the string as its complexity; every dot will account for one second of "work". For example, a fake task described by Hello... will take three seconds.
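The dot convention is easy to check without a broker. Here is a small illustrative helper (the class and method names are invented for this sketch, not part of the tutorial code) that computes how many seconds of fake work a task description represents:

```java
public class DotTimer {

    // One '.' in the task description stands for one second of fake work.
    static int secondsFor(String task) {
        int dots = 0;
        for (char ch : task.toCharArray()) {
            if (ch == '.') dots++;
        }
        return dots;
    }

    public static void main(String[] args) {
        // "Hello..." carries three dots, i.e. three seconds of simulated work
        System.out.println("Hello... -> " + secondsFor("Hello...") + "s");
    }
}
```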

We will slightly modify the Send.java code from our previous example, to allow arbitrary messages to be sent from the command line. This program will schedule tasks to our work queue, so let's name it NewTask.java:

String message = getMessage(argv);
channel.basicPublish("", "hello", null, message.getBytes());
System.out.println("[x] Sent '" + message + "'");
Some helpers to get the message from the command-line arguments:

private static String getMessage(String[] strings) {
    if (strings.length < 1)
        return "Hello World!";
    return joinStrings(strings, " ");
}

private static String joinStrings(String[] strings, String delimiter) {
    int length = strings.length;
    if (length == 0) return "";
    StringBuilder words = new StringBuilder(strings[0]);
    for (int i = 1; i < length; i++) {
        words.append(delimiter).append(strings[i]);
    }
    return words.toString();
}
Our old Recv.java program also requires some changes: it needs to fake a second of work for every dot in the message body. It will pop messages from the queue and perform the task, so let's call it Worker.java:

while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    String message = new String(delivery.getBody());

    System.out.println("[x] Received '" + message + "'");
    doWork(message);
    System.out.println("[x] Done");
}
Our fake task simulates execution time:

private static void doWork(String task) throws InterruptedException {
    for (char ch : task.toCharArray()) {
        if (ch == '.') Thread.sleep(1000);
    }
}
Compile them as in part one (with the jar files on the working path):

$ javac -cp rabbitmq-client.jar NewTask.java Worker.java
 
Round-robin dispatching

One of the advantages of using a task queue is the ability to easily parallelise work. If we are building up a backlog of work, we can just add more workers and scale easily that way.
First, let's try to run two worker instances at the same time. They will both get messages from the queue, but how exactly? Let's see.
You need three consoles open. Two will run the worker program; these consoles will be our two consumers, C1 and C2.

shell1$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar Worker
 [*] Waiting for messages. To exit press CTRL+C
shell2$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar Worker
 [*] Waiting for messages. To exit press CTRL+C
The third console will be used to publish new tasks. Once you've started the consumers you can publish a few messages:

shell3$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar NewTask First message.
shell3$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar NewTask Second message..
shell3$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar NewTask Third message...
shell3$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar NewTask Fourth message....
shell3$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar NewTask Fifth message.....
Let's see what is delivered to our workers:

shell1$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar Worker
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'First message.'
 [x] Received 'Third message...'
 [x] Received 'Fifth message.....'
shell2$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar Worker
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'Second message..'
 [x] Received 'Fourth message....'
By default, RabbitMQ will send each message to the next consumer in sequence. On average every consumer will get the same number of messages. This way of distributing messages is called round-robin. Try this out with three or more workers.
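The round-robin behaviour can be sketched without a broker. The following is an illustrative simulation, not RabbitMQ client code (all names are invented for the example): the n-th message simply goes to the n-th consumer, cycling through them in order.

```java
import java.util.*;

public class RoundRobinDemo {

    // Simulate RabbitMQ's default dispatch: message i goes to
    // consumer (i mod consumers), cycling through them in order.
    static Map<String, List<String>> dispatch(List<String> messages, int consumers) {
        Map<String, List<String>> received = new LinkedHashMap<>();
        for (int c = 1; c <= consumers; c++) received.put("C" + c, new ArrayList<>());
        for (int i = 0; i < messages.size(); i++) {
            received.get("C" + (i % consumers + 1)).add(messages.get(i));
        }
        return received;
    }

    public static void main(String[] args) {
        // With two consumers, C1 gets the odd-numbered messages
        // and C2 the even-numbered ones, as in the console output above.
        System.out.println(dispatch(
            Arrays.asList("First", "Second", "Third", "Fourth", "Fifth"), 2));
    }
}
```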

Message acknowledgment

Doing a task can take a few seconds. You may wonder what happens if one of the consumers starts a long task and dies with it only partly done. With our current code, once RabbitMQ delivers a message to the consumer it immediately removes it from memory. In this case, if you kill a worker we will lose the message it was just processing. We will also lose all the messages that were dispatched to this particular worker but were not yet handled.
But we don't want to lose any tasks. If a worker dies, we'd like the task to be delivered to another worker.
To make sure a message is never lost, RabbitMQ supports message acknowledgments. An acknowledgment is sent back by the consumer to tell RabbitMQ that a particular message has been received and processed, and that RabbitMQ is free to delete it.
If a consumer dies without sending an acknowledgment, RabbitMQ will understand that the message wasn't processed fully and will redeliver it to another consumer. That way you can be sure that no message is lost, even if the workers occasionally die.
There aren't any message timeouts; RabbitMQ will redeliver the message only when the worker connection dies. It's fine even if processing a message takes a very, very long time.
Message acknowledgments are turned on by default. In the previous examples we explicitly turned them off by passing autoAck=true. It's time to set this flag to false and send a proper acknowledgment from the worker once we're done with a task.

QueueingConsumer consumer = new QueueingConsumer(channel);
boolean autoAck = false;
channel.basicConsume("hello", autoAck, consumer);

while (true) {
  QueueingConsumer.Delivery delivery = consumer.nextDelivery();
  // ...
  channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}
Using this code we can be sure that even if you kill a worker with CTRL+C while it is processing a message, nothing will be lost. Soon after the worker dies, all unacknowledged messages will be redelivered.

Forgotten acknowledgment

Forgetting the acknowledgment is a common mistake. It's an easy error, but the consequences are serious. Messages will be redelivered when your client quits (which may look like random redelivery), but RabbitMQ will eat more and more memory, because it can't release any of the unacknowledged messages.

To debug this kind of mistake you can use rabbitmqctl to print the messages_unacknowledged field:

$ sudo rabbitmqctl list_queues name messages_ready messages_unacknowledged
Listing queues ...
hello 0 0
...done.
 
Message durability

We have learned how to make sure that even if a consumer dies, the task isn't lost. But our tasks will still be lost if the RabbitMQ server stops.

When RabbitMQ quits or crashes it will forget the queues and messages unless you tell it not to. Two things are required to make sure messages aren't lost: we need to mark both the queue and the messages as durable.

First, we need to make sure that RabbitMQ will never lose our queue. In order to do so, we need to declare it as durable:

boolean durable = true;
channel.queueDeclare("hello", durable, false, false, null);
Although this command is correct by itself, it won't work in our present setup. That's because we've already defined a non-durable queue called hello. RabbitMQ doesn't allow you to redefine an existing queue with different parameters and will return an error to any program that tries to do that. But there is a quick workaround: let's declare a queue with a different name, for example task_queue:

boolean durable = true;
channel.queueDeclare("task_queue", durable, false, false, null);
This queueDeclare change needs to be applied to both our producer and consumer code.
At this point we're sure that the task_queue queue won't be lost even if RabbitMQ restarts. Now we need to mark our messages as persistent, by setting MessageProperties (which implements BasicProperties) to the value PERSISTENT_TEXT_PLAIN.

import com.rabbitmq.client.MessageProperties;

channel.basicPublish("", "task_queue",
            MessageProperties.PERSISTENT_TEXT_PLAIN,
            message.getBytes());
A note on message persistence
Marking messages as persistent doesn't fully guarantee that a message won't be lost, even though it tells RabbitMQ to save the message to disk. There is still a short time window in which RabbitMQ has accepted a message and hasn't saved it yet. Also, RabbitMQ doesn't sync every message to disk; it may be just saved to cache and not really written out. The persistence guarantees aren't strong, but they are more than enough for our simple task queue. If you need a stronger guarantee you can use publisher confirms.

Fair dispatch

You might have noticed that the dispatching still doesn't work exactly as we want. For example, in a situation with two workers, when all odd messages are heavy and all even messages are light, one worker will be constantly busy while the other one does hardly any work. Well, RabbitMQ doesn't know anything about that and will still dispatch messages evenly.
This happens because RabbitMQ just dispatches a message when it enters the queue. It doesn't look at the number of unacknowledged messages for a consumer. It just blindly dispatches every n-th message to the n-th consumer.

To defeat that we can use the basicQos method with prefetchCount = 1. This tells RabbitMQ not to give more than one message to a worker at a time. In other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, the message will be dispatched to the next worker that is not still busy.

int prefetchCount = 1;
channel.basicQos(prefetchCount);
A note about queue size

If all of your workers are busy, your queue can fill up. You will want to keep an eye on that, and maybe add more workers, or have some other strategy.
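As a rough sketch of why basicQos(1) helps, the following broker-free simulation (all names invented for the example) assigns each task to whichever worker becomes free first, rather than strictly alternating:

```java
import java.util.*;

public class FairDispatchDemo {

    // With basicQos(1), a worker gets a new message only after acking the
    // previous one, so tasks go to whichever worker is free first.
    static List<String> assign(int[] taskSeconds, int workers) {
        int[] freeAt = new int[workers];          // time each worker becomes idle
        List<String> log = new ArrayList<>();
        for (int t = 0; t < taskSeconds.length; t++) {
            int w = 0;                            // pick the earliest-free worker
            for (int i = 1; i < workers; i++)
                if (freeAt[i] < freeAt[w]) w = i;
            log.add("task" + t + "->W" + (w + 1));
            freeAt[w] += taskSeconds[t];
        }
        return log;
    }

    public static void main(String[] args) {
        // Odd tasks heavy (9s), even tasks light (1s): round-robin would
        // overload one worker; fair dispatch balances by availability.
        System.out.println(assign(new int[]{9, 1, 9, 1, 9}, 2));
    }
}
```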

Putting it all together

Final code of our NewTask.java:

import java.io.IOException;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.MessageProperties;

public class NewTask {

  private static final String TASK_QUEUE_NAME = "task_queue";

  public static void main(String[] argv)
                      throws java.io.IOException {

    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();

    channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);

    String message = getMessage(argv);

    channel.basicPublish("", TASK_QUEUE_NAME,
            MessageProperties.PERSISTENT_TEXT_PLAIN,
            message.getBytes());
    System.out.println("[x] Sent '" + message + "'");

    channel.close();
    connection.close();
  }
  // ...
}
(NewTask.java source)
And our Worker.java:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;

public class Worker {

  private static final String TASK_QUEUE_NAME = "task_queue";

  public static void main(String[] argv)
                      throws java.io.IOException,
                      java.lang.InterruptedException {

    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();

    channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
    System.out.println("[*] Waiting for messages. To exit press CTRL+C");

    channel.basicQos(1);

    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(TASK_QUEUE_NAME, false, consumer);

    while (true) {
      QueueingConsumer.Delivery delivery = consumer.nextDelivery();
      String message = new String(delivery.getBody());

      System.out.println("[x] Received '" + message + "'");
      doWork(message);
      System.out.println("[x] Done");

      channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
    }
  }
  // ...
}
(Worker.java source)
Using message acknowledgments and prefetchCount you can set up a work queue. The durability options let the tasks survive even if RabbitMQ is restarted.

For more information on Channel methods and message properties, you can browse the javadocs online.

Now we can move on to guide 3 and learn how to deliver the same message to many consumers.

Publish/Subscribe

(Using the Java Client)

In the previous tutorial we created a work queue. The assumption behind a work queue is that each task is delivered to exactly one worker. In this part we'll do something completely different: we'll deliver a message to multiple consumers. This pattern is known as "publish/subscribe".

To illustrate the pattern, we're going to build a simple logging system. It will consist of two programs: the first will emit log messages and the second will receive and print them.

In our logging system every running copy of the receiver program will get the messages. That way we'll be able to run one receiver and direct the logs to disk, and at the same time run another receiver and see the logs on the screen.
Essentially, published log messages are going to be broadcast to all the receivers.

Exchanges

In previous parts of the guide we sent messages to and received messages from a queue. Now it's time to introduce the full messaging model in RabbitMQ.
Let's quickly go over what we covered in the previous guides:

A producer is a user application that sends messages.
A queue is a buffer that stores messages.
A consumer is a user application that receives messages.

The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. Actually, quite often the producer doesn't even know whether a message will be delivered to any queue at all.

Instead, the producer can only send messages to an exchange. An exchange is a very simple thing. On one side it receives messages from producers, and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it get discarded? The rules for that are defined by the exchange type.

There are a few exchange types available: direct, topic, headers and fanout. We'll focus on the last one, fanout. Let's create an exchange of this type and call it logs:

channel.exchangeDeclare("logs", "fanout");
The fanout exchange is very simple. As you can probably guess from the name, it just broadcasts all the messages it receives to all the queues it knows about. And that's exactly what we need for our logger.

Listing exchanges
To list the exchanges on the server you can run the ever-useful rabbitmqctl:

$ sudo rabbitmqctl list_exchanges
Listing exchanges ...
        direct
amq.direct direct
amq.fanout fanout
amq.headers headers
amq.match headers
amq.rabbitmq.log topic
amq.rabbitmq.trace topic
amq.topic topic
logs fanout
...done.
In this list there are some amq.* exchanges and the default (unnamed) exchange. These are created by default, but it is unlikely you'll need to use them at the moment.
The nameless exchange
In previous parts of the tutorial we knew nothing about exchanges, but we were still able to send messages to queues. That was possible because we were using the default exchange, which we identified by the empty string ("").
Recall how we published a message before:

channel.basicPublish("", "hello", null, message.getBytes());

The first parameter is the name of the exchange. The empty string denotes the default or nameless exchange: messages are routed to the queue whose name matches the routing key, if it exists.

Now we can publish to our named exchange instead:

channel.basicPublish("logs", "", null, message.getBytes());
 
Temporary queues

As you may remember, previously we were using queues with specific names (remember hello and task_queue?). Being able to name a queue was crucial for us: we needed to point the workers at the same queue. Giving a queue a name is important when you want to share the queue between producers and consumers.
But that's not the case for our logger. We want to hear about all log messages, not just a subset of them. We're also interested only in the currently flowing messages, not in the old ones. To solve that we need two things.
First, whenever we connect to RabbitMQ we need a fresh, empty queue. To do this we could create a queue with a random name or, even better, let the server choose a random queue name for us.
Second, once we disconnect the consumer, the queue should be automatically deleted.
In the Java client, when we call queueDeclare() with no parameters, we create a non-durable, exclusive, auto-delete queue with a generated name:

String queueName = channel.queueDeclare().getQueue();

At that point queueName contains a random queue name, something like amq.gen-JzTY20BRgKO-HjmUJj0wLg.

Bindings

We've already created a fanout exchange and a queue. Now we need to tell the exchange to send messages to our queue. That relationship between an exchange and a queue is called a binding.

channel.queueBind(queueName, "logs", "");

From now on the logs exchange will append messages to our queue.

Listing bindings
You can list existing bindings using, you guessed it, rabbitmqctl list_bindings.

Putting it all together

The producer program, which emits log messages, doesn't look much different from the previous tutorial. The most important change is that we now publish messages to our logs exchange instead of the nameless one. We need to supply a routing key when sending, but its value is ignored for fanout exchanges. Here goes the code for the EmitLog.java program:

import java.io.IOException;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;

public class EmitLog {

    private static final String EXCHANGE_NAME = "logs";

    public static void main(String[] argv)
                  throws java.io.IOException {

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "fanout");

        String message = getMessage(argv);

        channel.basicPublish(EXCHANGE_NAME, "", null, message.getBytes());
        System.out.println("[x] Sent '" + message + "'");

        channel.close();
        connection.close();
    }
    // ...
}
(EmitLog.java source)
As you can see, we declare the exchange after establishing the connection. This step is necessary, as publishing to a non-existing exchange is forbidden.

If no queue is bound to the exchange yet, the messages will be lost, but that's okay for us; if no consumer is listening yet, we can safely discard the message.
The code for ReceiveLogs.java:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;

public class ReceiveLogs {

    private static final String EXCHANGE_NAME = "logs";

    public static void main(String[] argv)
                  throws java.io.IOException,
                  java.lang.InterruptedException {

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "fanout");
        String queueName = channel.queueDeclare().getQueue();
        channel.queueBind(queueName, EXCHANGE_NAME, "");

        System.out.println("[*] Waiting for messages. To exit press CTRL+C");

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume(queueName, true, consumer);

        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());

            System.out.println("[x] Received '" + message + "'");
        }
    }
}
(ReceiveLogs.java source)
As previously compiled, we have done.

$ Javac -cp rabbitmq-client.jar EmitLog.java ReceiveLogs.java
If you want to save the log to a file, only to open a control platform, type:

$ Java -cp:. Commons-io-1.2.jar: commons-cli-1.1.jar: rabbitmq-client.jar ReceiveLogs> logs_from_rabbit.log
If you want your screen point of view these logs, a new terminal and run:

$ Java -cp:. Commons-io-1.2.jar: commons-cli-1.1.jar: rabbitmq-client.jar ReceiveLogs
And of course, to emit logs, type:

$ java -cp .:commons-io-1.2.jar:commons-cli-1.1.jar:rabbitmq-client.jar EmitLog
Using rabbitmqctl list_bindings you can verify that the code actually creates the bindings and queues we want. With two ReceiveLogs.java programs running you should see something like:

$ sudo rabbitmqctl list_bindings
Listing bindings ...
logs exchange amq.gen-JzTY20BRgKO-HjmUJj0wLg queue []
logs exchange amq.gen-vso0PVvyiRIL2WoV3i48Yg queue []
... Done.
The interpretation of the result is straightforward: data from the logs exchange goes to two queues with server-generated names. And that's exactly what we intended.

To find out how to listen for a subset of the messages, let's move on to part four of the guide.

Routing

(Using the Java Client)

In the previous tutorial we built a simple logging system and were able to broadcast log messages to many receivers.
In this tutorial we're going to add a feature to it - we'll make it possible to subscribe only to a subset of the messages. For example, we will be able to direct only critical error messages to the log file (to save disk space), while still being able to print all of the log messages on the console.

Binding

In previous examples we already created bindings. You may recall code like:

channel.queueBind(queueName, EXCHANGE_NAME, "");
A binding is a relationship between an exchange and a queue. This can be simply read as: the queue is interested in messages from this exchange.

Bindings can take an extra routing key parameter. To avoid confusion with the basic_publish parameter we're going to call it a binding key. This is how we could create a binding with a key:

channel.queueBind(queueName, EXCHANGE_NAME, "black");
The meaning of a binding key depends on the exchange type. The fanout exchanges we used previously simply ignore its value.

Direct exchange

Our logging system from the previous tutorial broadcasts all messages to all consumers. We want to extend that to allow filtering messages based on their severity. For example, we may want a program that writes log messages to disk to receive only critical errors, and not waste disk space on warning or info log messages.
We were using a fanout exchange, which doesn't give us much flexibility - it's only capable of mindless broadcasting.

We could use a direct exchange instead. The routing algorithm behind a direct exchange is simple - a message goes to the queues whose binding key exactly matches the routing key of the message.

To illustrate that, consider the following setup:

In this setup we can see a direct exchange with two queues bound to it. The first queue is bound with binding key orange, and the second has two bindings, one with binding key black and the other with binding key green.
In such a setup a message published to the exchange with routing key orange will be routed to queue Q1. Messages with routing key black or green will go to Q2. All other messages will be discarded.
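The routing rule described above can be sketched as a tiny in-memory simulation. This is only an illustration of the exact-match logic, not the broker's own code; the queue names Q1 and Q2 follow the example:

```java
import java.util.*;

public class DirectRoutingDemo {
    // binding key -> queues bound with that key
    static Map<String, List<String>> bindings = new HashMap<>();

    static {
        bind("Q1", "orange");
        bind("Q2", "black");
        bind("Q2", "green");
    }

    static void bind(String queue, String bindingKey) {
        bindings.computeIfAbsent(bindingKey, k -> new ArrayList<>()).add(queue);
    }

    // A direct exchange delivers to every queue whose binding key
    // equals the message's routing key; unmatched messages are dropped.
    static List<String> route(String routingKey) {
        return bindings.getOrDefault(routingKey, Collections.<String>emptyList());
    }

    public static void main(String[] args) {
        System.out.println(route("orange")); // [Q1]
        System.out.println(route("black"));  // [Q2]
        System.out.println(route("white"));  // [] - discarded
    }
}
```

Running it shows orange going to Q1, black to Q2, and an unmatched key going nowhere, exactly as in the setup above.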

Multiple bindings

It is perfectly legal to bind multiple queues with the same binding key. In our example we could add a binding between X and Q1 with binding key black. In that case the direct exchange would behave like fanout and broadcast the message to all the matching queues: a message with routing key black would be delivered to both Q1 and Q2.

Emitting logs

We'll use this model for our logging system. Instead of fanout we'll send messages to a direct exchange, supplying the log severity as the routing key. That way the receiving program will be able to select the severities it wants to receive. Let's focus on emitting logs first.
As always, we need to create an exchange first:

channel.exchangeDeclare(EXCHANGE_NAME, "direct");
We are ready to send a message:

channel.basicPublish(EXCHANGE_NAME, severity, null, message.getBytes());
To simplify things, we will assume that 'severity' is one of info, warning, or error.

Subscribing

Receiving messages will work just as in the previous tutorial, with one exception - we're going to create a new binding for each severity we're interested in.

String queueName = channel.queueDeclare().getQueue();

for (String severity: argv) {
  channel.queueBind(queueName, EXCHANGE_NAME, severity);
}
 
Putting it all together

The code for the EmitLogDirect.java class:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;

public class EmitLogDirect {

    private static final String EXCHANGE_NAME = "direct_logs";

    public static void main(String[] argv)
                  throws java.io.IOException {

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "direct");

        String severity = getSeverity(argv);
        String message = getMessage(argv);

        channel.basicPublish(EXCHANGE_NAME, severity, null, message.getBytes());
        System.out.println("[x] Sent '" + severity + "':'" + message + "'");

        channel.close();
        connection.close();
    }
    // ..
}
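The `// ..` above elides the argument-parsing helpers. A minimal sketch of what getSeverity and getMessage might look like is below; these exact bodies and defaults are our own assumption for illustration, not necessarily the official example's code:

```java
import java.util.Arrays;

public class LogArgs {
    // Hypothetical helper: the first argument is the severity,
    // defaulting to "info" when none is given.
    static String getSeverity(String[] argv) {
        return argv.length > 0 ? argv[0] : "info";
    }

    // Hypothetical helper: the remaining arguments form the message body.
    static String getMessage(String[] argv) {
        if (argv.length < 2) return "Hello World!";
        return String.join(" ", Arrays.copyOfRange(argv, 1, argv.length));
    }

    public static void main(String[] argv) {
        System.out.println(getSeverity(new String[]{"error", "disk", "full"})); // error
        System.out.println(getMessage(new String[]{"error", "disk", "full"}));  // disk full
    }
}
```

With helpers like these, `EmitLogDirect error "disk full"` would publish the message "disk full" with routing key error.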
The code for the ReceiveLogsDirect.java class:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;

public class ReceiveLogsDirect {

    private static final String EXCHANGE_NAME = "direct_logs";

    public static void main(String[] argv)
                  throws java.io.IOException,
                  java.lang.InterruptedException {

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "direct");
        String queueName = channel.queueDeclare().getQueue();

        if (argv.length < 1) {
            System.err.println("Usage: ReceiveLogsDirect [info] [warning] [error]");
            System.exit(1);
        }

        for (String severity: argv) {
            channel.queueBind(queueName, EXCHANGE_NAME, severity);
        }

        System.out.println("[*] Waiting for messages. To exit press CTRL+C");

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume(queueName, true, consumer);

        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());
            String routingKey = delivery.getEnvelope().getRoutingKey();

            System.out.println("[x] Received '" + routingKey + "':'" + message + "'");
        }
    }
}
Compile as usual (see part one of the guide for advice on compiling and the classpath). For convenience, we'll use an environment variable $CP (%CP% on Windows) for the classpath when running the examples.
If you want to save only warning and error (and not info) log messages to a file, just open a console and type:

$ java -cp $CP ReceiveLogsDirect warning error > logs_from_rabbit.log
If you'd like to see all the log messages on your screen, open a new terminal and type:

$ java -cp $CP ReceiveLogsDirect info warning error
 [*] Waiting for logs. To exit press CTRL+C
And, for example, to emit an error log message, just type:

$ java -cp $CP EmitLogDirect error "Run. Run. Or it will explode."
 [x] Sent 'error':'Run. Run. Or it will explode.'
The full source code is in EmitLogDirect.java and ReceiveLogsDirect.java.

Move on to part five of the guide to find out how to listen for messages based on a pattern.

Topics

(Using the Java Client)

In the previous tutorial we improved our logging system. Instead of using a fanout exchange, which is only capable of dumb broadcasting, we used a direct exchange and gained the ability to receive logs selectively.

Although using a direct exchange improved our system, it still has a limitation - it can't do routing based on multiple criteria.

In our logging system we might want to subscribe not only based on the severity of the log, but also based on the source which emitted it. You may know this concept from the unix syslog tool, which routes logs based on both severity (info/warn/crit...) and facility (auth/cron/kern...).

That would give us a lot of flexibility - we might want to listen to just critical errors coming from 'cron', but also all logs from 'kern'.

To implement that in our logging system, we need to learn about the more complex topic exchange.

Topic exchange

Messages sent to a topic exchange can't have an arbitrary routing key - it must be a list of words, delimited by dots. The words can be anything, but usually they specify some features of the message. A few valid routing key examples: "stock.usd.nyse", "nyse.vmw", "quick.orange.rabbit". There can be as many words in the routing key as you like, up to the limit of 255 bytes.
The binding key must also be in the same form. The logic behind a topic exchange is similar to a direct one - a message sent with a particular routing key will be delivered to all the queues that are bound with a matching binding key. However, there are two important special cases for binding keys:

> * (star) can substitute for exactly one word.
> # (hash) can substitute for zero or more words.

It's easiest to explain this with an example:

In this example, we're going to send messages which all describe animals. The routing key of a sent message will consist of three words (and two dots). The first word describes speed, the second a colour, and the third a species:
"<speed>.<colour>.<species>"

We created three bindings: Q1 is bound with binding key "*.orange.*", and Q2 with "*.*.rabbit" and "lazy.#".
These bindings can be summarised as:

> Q1 is interested in all the orange animals.
> Q2 wants to hear everything about rabbits, and everything about lazy animals.

A message with routing key "quick.orange.rabbit" will be delivered to both queues. The message "lazy.orange.elephant" will also go to both of them. On the other hand "quick.orange.fox" will only go to the first queue, and "lazy.brown.fox" only to the second. "lazy.pink.rabbit" will be delivered to the second queue only once, even though it matches two bindings. "quick.brown.fox" doesn't match any binding, so it will be discarded.

What happens if we break our contract and send a message with one or four words, like "orange" or "quick.orange.male.rabbit"? Well, these messages won't match any bindings and will be lost.
On the other hand "lazy.orange.male.rabbit", even though it has four words, will match the last binding and will be delivered to the second queue.
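The matching rules above can be checked with a small stand-alone matcher. This is our own illustration of the semantics of * and #, not the broker's implementation, and it ignores AMQP edge cases beyond the ones discussed here:

```java
public class TopicMatch {
    // Returns true if the routing key matches the binding pattern,
    // where "*" matches exactly one word and "#" matches zero or more words.
    static boolean matches(String pattern, String routingKey) {
        return match(pattern.split("\\."), 0, routingKey.split("\\."), 0);
    }

    static boolean match(String[] p, int i, String[] k, int j) {
        if (i == p.length) return j == k.length;   // pattern exhausted
        if (p[i].equals("#")) {
            // "#" may consume zero or more of the remaining words
            for (int skip = j; skip <= k.length; skip++)
                if (match(p, i + 1, k, skip)) return true;
            return false;
        }
        if (j == k.length) return false;           // key exhausted, pattern isn't
        if (p[i].equals("*") || p[i].equals(k[j]))
            return match(p, i + 1, k, j + 1);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("*.orange.*", "quick.orange.rabbit")); // true
        System.out.println(matches("lazy.#", "lazy.orange.male.rabbit")); // true
        System.out.println(matches("*.orange.*", "quick.brown.fox"));     // false
    }
}
```

Running the examples from the text through this matcher reproduces the deliveries described above, including "lazy.orange.male.rabbit" matching "lazy.#" despite its four words.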

Topic exchange
A topic exchange is powerful and can behave like other exchanges:
When a queue is bound with the "#" (hash) binding key, it will receive all messages, regardless of the routing key - like a fanout exchange.
When the special characters "*" (star) and "#" (hash) aren't used in the bindings, a topic exchange behaves just like a direct one.

 

Putting it all together

We're going to use a topic exchange in our logging system. We'll start off with the assumption that the routing keys of logs will have two words: "<facility>.<severity>".
The code is almost the same as in the previous tutorial.
The code for EmitLogTopic.java:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;

public class EmitLogTopic {

    private static final String EXCHANGE_NAME = "topic_logs";

    public static void main(String[] argv)
                  throws Exception {

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "topic");

        String routingKey = getRouting(argv);
        String message = getMessage(argv);

        channel.basicPublish(EXCHANGE_NAME, routingKey, null, message.getBytes());
        System.out.println("[x] Sent '" + routingKey + "':'" + message + "'");

        connection.close();
    }
    // ...
}
The code for ReceiveLogsTopic.java:

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;

public class ReceiveLogsTopic {

    private static final String EXCHANGE_NAME = "topic_logs";

    public static void main(String[] argv)
                  throws Exception {

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "topic");
        String queueName = channel.queueDeclare().getQueue();

        if (argv.length < 1) {
            System.err.println("Usage: ReceiveLogsTopic [binding_key]...");
            System.exit(1);
        }

        for (String bindingKey: argv) {
            channel.queueBind(queueName, EXCHANGE_NAME, bindingKey);
        }

        System.out.println("[*] Waiting for messages. To exit press CTRL+C");

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume(queueName, true, consumer);

        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());
            String routingKey = delivery.getEnvelope().getRoutingKey();

            System.out.println("[x] Received '" + routingKey + "':'" + message + "'");
        }
    }
}
Run the following examples (on Windows, use %CP% for the classpath, as in part one of the guide).
To receive all the logs:

$ java -cp $CP ReceiveLogsTopic "#"
To receive all logs from the facility "kern":

$ java -cp $CP ReceiveLogsTopic "kern.*"
Or if you want to hear only about 'critical' logs:

$ java -cp $CP ReceiveLogsTopic "*.critical"
You can create multiple bindings:

$ java -cp $CP ReceiveLogsTopic "kern.*" "*.critical"
And to emit a log with routing key "kern.critical", type:

$ java -cp $CP EmitLogTopic "kern.critical" "A critical kernel error"
Have fun playing with these programs. Note that the code doesn't make any assumptions about particular routing or binding keys; you may want to experiment with more than two routing key parameters.

Some challenges:

> Will a "*" binding catch a message sent with an empty routing key?
> Will "#.*" catch a message with the key ".."? Will it catch a message with a single-word key?
> How different is "a.*.#" from "a.#"?

The full source code is in EmitLogTopic.java and ReceiveLogsTopic.java.

Next, in part six of the guide, we'll find out how to do a round trip of a message as a remote procedure call.

Remote procedure call (RPC)

(Using the Java Client)

In the second part of the guide we learned how to use work queues to distribute time-consuming tasks among multiple workers.

But what if we need to run a function on a remote computer and wait for the result? Well, that's a different story. This pattern is commonly known as Remote Procedure Call, or RPC.

In this part we'll use RabbitMQ to build an RPC system: a client and a scalable RPC server. As we don't have any time-consuming task worth distributing, we'll create a dummy RPC service that returns Fibonacci numbers.

Client interface

To illustrate how an RPC service could be used, we'll create a simple client class. It will expose a method named call which sends an RPC request and blocks until the answer is received:

FibonacciRpcClient fibonacciRpc = new FibonacciRpcClient();
String result = fibonacciRpc.call("4");
System.out.println("fib(4) is " + result);
A note on RPC
Although RPC is a pretty common pattern in computing, it's often criticised. The problems arise when a programmer is not aware whether a function call is local or a slow RPC. Confusion like that results in an unpredictable system and adds unnecessary complexity to debugging. Instead of simplifying software, misused RPC can produce unmaintainable spaghetti code.

Bearing that in mind, consider the following advice:
Make sure it's obvious which function call is local and which is remote.

Document your system, so that the dependencies between components are clearly visible.

Handle error cases. How should the client react when the RPC server is down for a long time?
When in doubt avoid RPC. If you can, use an asynchronous pipeline instead - rather than RPC-like blocking, results are asynchronously pushed to the next computation stage.

Callback queue

In general doing RPC over RabbitMQ is easy. A client sends a request message and a server replies with a response message. In order to receive a response we need to send a 'callback' queue address with the request. We can use the default queue (which is exclusive in the Java client). Let's try it:

callbackQueueName = channel.queueDeclare().getQueue();

BasicProperties props = new BasicProperties
                            .Builder()
                            .replyTo(callbackQueueName)
                            .build();

channel.basicPublish("", "rpc_queue", props, message.getBytes());

// ... then code to read a response message from the callback_queue ...
Message properties
The AMQP protocol predefines a set of 14 properties that go with a message. Most of the properties are rarely used, with the exception of the following:
deliveryMode: marks a message as persistent (with a value of 2) or transient (any other value). You may remember this property from the second part of the guide.
contentType: used to describe the mime-type of the encoding. For the often-used JSON encoding, for example, it is good practice to set this property to application/json.
replyTo: commonly used to name a callback queue.
correlationId: useful to correlate RPC responses with requests.

We need this new import:

import com.rabbitmq.client.AMQP.BasicProperties;
Correlation Id

In the method presented above we suggest creating a callback queue for every RPC request. That's pretty inefficient, but fortunately there is a better way - let's create a single callback queue per client.
That raises a new issue: having received a response in that queue, it's not clear to which request the response belongs. That's when the correlationId property is used. We're going to set it to a unique value for every request. Later, when we receive a message in the callback queue, we'll look at this property, and based on its value we'll be able to match each response with the corresponding request. If we see an unknown correlationId value, we may safely discard the message - it doesn't belong to any of our requests.

You may ask why we should ignore unknown messages in the callback queue rather than failing with an error. It's due to a possible race condition on the server side. Although unlikely, it is possible that the RPC server dies just after sending us the answer, but before sending an acknowledgment for the request. If that happens, the restarted RPC server will process the request again. That's why on the client we must handle duplicate responses gracefully, and the RPC should ideally be idempotent.
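The discard-unknown-correlationId rule can be sketched with a plain map of pending requests. This is a simulation of the client-side bookkeeping only, not the RabbitMQ API:

```java
import java.util.*;

public class CorrelationDemo {
    // Pending requests, keyed by the correlationId generated when sending.
    static Map<String, String> pending = new HashMap<>();

    // Returns true if the response was matched to a pending request.
    // An unknown correlationId is safely ignored - e.g. a duplicate
    // response arriving after a server restart.
    static boolean onResponse(String correlationId, String body) {
        if (!pending.containsKey(correlationId)) return false; // discard
        pending.put(correlationId, body);
        return true;
    }

    public static void main(String[] args) {
        String corrId = UUID.randomUUID().toString();
        pending.put(corrId, null);                        // request sent, no answer yet

        System.out.println(onResponse("stale-id", "42")); // false - discarded
        System.out.println(onResponse(corrId, "42"));     // true - matched
        System.out.println(pending.get(corrId));          // 42
    }
}
```

The real client below does the same thing in its while loop: it compares each delivery's correlationId against the one it saved when publishing.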

Summary

Our RPC will work like this:

When the client starts up, it creates an anonymous exclusive callback queue.
For an RPC request, the client sends a message with two properties: replyTo, set to the callback queue, and correlationId, set to a unique value for every request.
The request is sent to the rpc_queue queue.
The RPC worker (aka server) waits for requests on that queue. When a request appears, it does the job and sends a message with the result back to the client, using the queue named in the replyTo property.
The client waits for data on the callback queue. When a message appears, it checks the correlationId property. If it matches the value from the request, it returns the response to the application.

Putting it all together

The Fibonacci task:

private static int fib(int n) throws Exception {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n-1) + fib(n-2);
}
We declare our Fibonacci function. It assumes only valid positive integer input. (Don't expect this one to work for big numbers - it's probably the slowest recursive implementation possible.)
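If you do want the service to cope with larger inputs, an iterative version avoids the exponential recursion. This is a simple alternative we add for illustration; it is still limited by int overflow (around n = 46):

```java
public class FibIterative {
    // Iterative Fibonacci: O(n) time, O(1) space.
    static int fib(int n) {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            int next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fib(30)); // 832040
    }
}
```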
The code for our RPC server, RPCServer.java:

private static final String RPC_QUEUE_NAME = "rpc_queue";

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");

Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

channel.queueDeclare(RPC_QUEUE_NAME, false, false, false, null);

channel.basicQos(1);

QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(RPC_QUEUE_NAME, false, consumer);

System.out.println("[x] Awaiting RPC requests");

while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();

    BasicProperties props = delivery.getProperties();
    BasicProperties replyProps = new BasicProperties
                                     .Builder()
                                     .correlationId(props.getCorrelationId())
                                     .build();

    String message = new String(delivery.getBody());
    int n = Integer.parseInt(message);

    System.out.println("[.] fib(" + message + ")");
    String response = "" + fib(n);

    channel.basicPublish("", props.getReplyTo(), replyProps, response.getBytes());

    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}
The server code is pretty straightforward:
As usual, we start by establishing the connection, the channel, and declaring the queue.
We might want to run more than one server process. In order to spread the load equally over multiple servers, we need to set the prefetchCount property via channel.basicQos.
We use basicConsume to access the queue. Then we enter the while loop, in which we wait for request messages, do the work, and send the response back.

The code for our RPC client, RPCClient.java:

private Connection connection;
private Channel channel;
private String requestQueueName = "rpc_queue";
private String replyQueueName;
private QueueingConsumer consumer;

public RPCClient() throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    connection = factory.newConnection();
    channel = connection.createChannel();

    replyQueueName = channel.queueDeclare().getQueue();
    consumer = new QueueingConsumer(channel);
    channel.basicConsume(replyQueueName, true, consumer);
}

public String call(String message) throws Exception {
    String response = null;
    String corrId = java.util.UUID.randomUUID().toString();

    BasicProperties props = new BasicProperties
                                .Builder()
                                .correlationId(corrId)
                                .replyTo(replyQueueName)
                                .build();

    channel.basicPublish("", requestQueueName, props, message.getBytes());

    while (true) {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        if (delivery.getProperties().getCorrelationId().equals(corrId)) {
            response = new String(delivery.getBody());
            break;
        }
    }

    return response;
}

public void close() throws Exception {
    connection.close();
}
The client code is slightly more involved:
We establish a connection and a channel, and declare an exclusive callback queue for replies.
We subscribe to the callback queue, so that we can receive the RPC responses.
The call method makes the actual RPC request.
Here we first generate a unique correlationId number and save it - the while loop will use this value to catch the appropriate response.
Next, we publish the request message with two properties: replyTo and correlationId.
At this point we can sit back and wait for the proper response to arrive.
The while loop does a very simple job: for every response message it checks whether the correlationId is the one we're looking for. If so, it saves the response.
Finally we return the response to the user.

Making the client request is as simple as:

RPCClient fibonacciRpc = new RPCClient();

System.out.println("[x] Requesting fib(30)");
String response = fibonacciRpc.call("30");
System.out.println("[.] Got '" + response + "'");

fibonacciRpc.close();
Now is a good time to take a look at the full example source code for RPCClient.java and RPCServer.java (which includes basic exception handling).
Compile and set up the classpath as usual (see part one of the guide):

$ javac -cp rabbitmq-client.jar RPCClient.java RPCServer.java
Our RPC service is now ready. We can start the server:

$ java -cp $CP RPCServer
 [x] Awaiting RPC requests
To request a Fibonacci number, run the client:

$ java -cp $CP RPCClient
 [x] Requesting fib(30)
The design presented here is not the only possible implementation of an RPC service, but it has some important advantages:
If the RPC server is too slow, you can scale up by just running another one: try starting a second RPCServer in a new console. On the client side, the RPC requires sending and receiving only one message, and no synchronous calls like queueDeclare are required. As a result the RPC client needs only a single network round trip for one RPC request.

Our code is still pretty simplistic and doesn't try to solve more complex (but important) problems, like:
How should the client react if there are no servers running?
Should a client have some kind of timeout for the RPC?
If the server malfunctions and raises an exception, should it be forwarded to the client?
Protecting against invalid incoming messages (e.g. checking bounds and types) before processing them.
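As an illustration of the timeout question: QueueingConsumer is backed by a blocking queue, and its nextDelivery(long timeout) overload returns null when the timeout expires. The sketch below shows the same give-up-after-a-deadline pattern with a plain java.util.concurrent BlockingQueue standing in for the callback queue (the 100 ms timeout is our arbitrary choice for the example):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class RpcTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the callback queue the RPC client reads from.
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();

        // No server is running, so nothing ever arrives. poll(...) gives up
        // after the deadline instead of blocking forever, just like
        // nextDelivery(timeout) returning null.
        String response = responses.poll(100, TimeUnit.MILLISECONDS);

        if (response == null) {
            System.out.println("RPC timed out - no server answered");
        } else {
            System.out.println("Got: " + response);
        }
    }
}
```

In a real client you would then either retry, fail the call with an exception, or fall back to a default, depending on your error-handling policy.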
     
         
         
         