I. Description
Solr 5 ships with a built-in Jetty service, so there is no need to install and deploy it on Tomcat, despite the large amount of Tomcat deployment material available online.
Pre-deployment preparations:
1. Configure each host with a static IP (make sure the hosts can communicate properly; to avoid unnecessary network traffic, it is recommended to keep them on the same network segment).
2. Set the host names and configure the host mappings: edit the hosts file and add an entry mapping each host's IP to its host name (see the example after this list).
3. Open the required ports, or simply turn off the firewall.
4. Make sure the ZooKeeper cluster service is running. For ZooKeeper deployment, see: http://www.cnblogs.com/wxisme/p/5178211.html
5. Root privileges.
6. Relevant parameters and components:
Collection: a complete logical index in a SolrCloud cluster. It is usually divided into one or more Shards, which all use the same Config Set. If there is more than one Shard, the index is distributed. SolrCloud lets you refer to a Collection by name, without having to worry about the Shard-related parameters that distributed search would otherwise require.
Core: a Solr Core. A Solr instance contains one or more Solr Cores, each of which independently provides indexing and query functionality. Each Solr Core corresponds to one index, or to one Shard of a Collection; Solr Cores were introduced to increase management flexibility and resource sharing. The difference in SolrCloud is that the configuration is kept in ZooKeeper, whereas a traditional Solr core reads its configuration from a directory on disk.
Leader: the Shard Replica that wins the election. Each Shard has multiple Replicas, and these Replicas elect one Leader among themselves. An election can occur at any time, but it is usually triggered only when a Solr instance fails. When documents are indexed, SolrCloud passes them to the Leader of the corresponding Shard, and the Leader then distributes them to all of that Shard's Replicas.
Replica: a copy of a Shard. Each Replica lives in a Solr Core. For example, a collection named "test" created with numShards=1 and replicationFactor=2 will have two Replicas, and therefore two corresponding Cores, each on a different machine or Solr instance. One will be named test_shard1_replica1 and the other test_shard1_replica2; one of them is elected as the Leader.
Shard: a logical slice of a Collection. Each Shard consists of one or more Replicas, and a Leader is determined among them by election.
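For step 2 above, a minimal sketch of the hosts file entries, assuming three nodes named node1, node2 and node3 (the IP addresses are placeholders only):
# /etc/hosts - map each host's IP to its host name
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3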
II. Installation procedure
1. Download the Solr 5.2.1 installation package from the official Apache Solr website.
2. Go to the directory containing the Solr archive and run the following command to extract the installation script from the package:
tar -xvzf solr-5.2.1.tgz solr-5.2.1/bin/install_solr_service.sh --strip-components=2
3. Run the installation script
Execute the following command to install Solr:
./install_solr_service.sh solr-5.2.1.tgz -i /usr/solr/solr5 -d /usr/solr/solr5 -u solr -s solr -p 8983
Or execute the following command to install with the default values:
./install_solr_service.sh solr-5.2.1.tgz
You can also simply extract the installation package directly and then customize the configuration yourself.
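For reference, the defaults used by the install script (per the Solr reference guide; verify against your version) should make the second command above roughly equivalent to:
# Assumed defaults: install dir /opt, data dir /var/solr, user solr, service name solr, port 8983
./install_solr_service.sh solr-5.2.1.tgz -i /opt -d /var/solr -u solr -s solr -p 8983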
4. Modify the configuration
Run the following command to edit the solr.in.sh file:
vim /usr/solr5/solr.in.sh
Modify it with reference to the following:
SOLR_JAVA_MEM="-Xms1G -Xmx1G"
ZK_HOST="node1:2181,node2:2181,node3:2181/solr"
The memory limit can be set as desired.
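Note that ZK_HOST above uses a /solr chroot. If that znode does not exist in ZooKeeper yet, it can be created first, for example with the zkcli.sh script shipped with Solr (a sketch, assuming the addresses and /solr path used above):
./server/scripts/cloud-scripts/zkcli.sh -zkhost node1:2181,node2:2181,node3:2181 -cmd makepath /solr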
5. Repeat the steps above to install the other nodes.
III. Start the Solr service and verify
1. Execute the following command on each Solr cluster node to start the Solr service:
service solr start
2. Check the Solr status:
service solr status
3. Log in to the Solr UI:
http://node1:8983/solr
IV. Testing Solr
Solr can be operated in several common ways: shell commands, the REST API, the SolrJ interface, and so on, depending on the actual situation. For simplicity and convenience, the following operations are demonstrated with shell commands.
1. Open and edit the schema.xml file under server/solr/configsets/sample_techproducts_configs/conf and simply add a field at the end of the file, for example as shown below.
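A minimal sketch of such a field definition (the field name student_name is only an example; the type must be one already defined in the sample schema, such as string):
<!-- hypothetical example field; place it inside the <schema> element -->
<field name="student_name" type="string" indexed="true" stored="true"/>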
2. Create a collection, uploading the associated configuration files to ZooKeeper:
./bin/solr create_collection -c students -d server/solr/configsets/sample_techproducts_configs/conf -shards 3 -replicationFactor 3
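As mentioned above, the same thing can also be done through the Collections REST API; an approximate equivalent of the command above might look like this (assuming the configuration set has already been uploaded to ZooKeeper under the name students, and raising maxShardsPerNode since 3 shards x 3 replicas on 3 nodes means several cores per node):
curl "http://node1:8983/solr/admin/collections?action=CREATE&name=students&numShards=3&replicationFactor=3&maxShardsPerNode=3&collection.configName=students"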
If you later need to push updated configuration files to ZooKeeper, you can use the following command to update the whole configuration set:
./server/scripts/cloud-scripts/zkcli.sh -zkhost node2:2181,node1:2181,node3:2181 -cmd upconfig -confname students -confdir server/solr/configsets/sample_techproducts_configs/conf
If you only want to update a single file, use the putfile command:
./server/scripts/cloud-scripts/zkcli.sh -zkhost node2:2181,node1:2181,node3:2181 -cmd putfile /solr/configs/students/schema.xml /usr/tempfiles/schema.xml
The first path is where the configuration file is stored in ZooKeeper, and the second is the local path of the configuration file. Note that if the file already exists in ZooKeeper, you need to delete it first and then upload the updated version. You can log in to ZooKeeper to do this:
ZK_HOME/bin/zkCli.sh -timeout 5000 -server node3:2181
After logging in to ZooKeeper you can use its commands to delete configuration files and perform other operations; if you do not know where a configuration file is located in ZooKeeper, you can also browse for it, as in the sketch below.
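A minimal sketch of such a session, assuming the students configuration set used above (the commands are the standard ZooKeeper CLI commands):
ls /solr/configs/students                  # list the files in the configuration set
get /solr/configs/students/schema.xml      # view the contents of a file
delete /solr/configs/students/schema.xml   # remove a single file before re-uploading it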
3. To check whether the collection was created successfully, refresh the Solr UI page and click Cloud; if the Collection was created successfully, the Solr cluster topology will be displayed.
You can also check that the field defined in schema.xml has taken effect.
4. Add index data and query it
1. Index data can be added in any of the ways mentioned above; for simplicity, use the Solr UI to add some simple index data for testing.
Clicking Submit returns a success message.
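If you prefer the command line, roughly the same can be done through the REST API; a sketch, assuming the students collection created above and a document using the hypothetical student_name field added earlier:
curl "http://node1:8983/solr/students/update?commit=true" -H "Content-Type: application/json" -d '[{"id":"1","student_name":"test"}]'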
2. Query test.
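A simple query can likewise be issued over HTTP; a sketch against the students collection created above:
curl "http://node1:8983/solr/students/select?q=*:*&wt=json&indent=true"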
After the test is finished you can remove the Collection:
http://node1:8983/solr/admin/collections?action=DELETE&name=students
This completes the simple deployment and testing of Solr. From here you can study Solr in more depth: how it works, how to define data structures, configuration, and queries.
Reference: Solr official documentation