Problems encountered deploying CloudFoundry with BOSH on OpenStack
     
  Add Date : 2018-11-21      
         
         
         
1. Problems encountered when deploying Micro BOSH

1.1 Error when executing the Micro BOSH deploy command on the Micro BOSH virtual machine:

bosh micro deploy /var/vcap/stemcells/micro-bosh-stemcell-openstack-kvm-0.8.1.tgz

Error message:

Could not find Cloud Provider Plugin: openstack

This is not actually the real error message. You can inspect the code that throws it, located at: /usr/local/rvm/gems/ruby-1.9.3-p374/gems/bosh_cpi-0.5.1/lib/cloud/provider.rb

Remove the exception-handling block so the code reads as follows, exposing the real exception:

module Bosh::Clouds
  class Provider

    def self.create(plugin, options)
      require "cloud/#{plugin}"
      Bosh::Clouds.const_get(plugin.capitalize).new(options)
    end

  end
end

Exception information:

Failed to load plugin
gems/bosh_deployer-1.4.1/lib/bosh/cli/commands/micro.rb: Unable to activate
fog-1.10.1, because net-scp-1.0.4 conflicts with net-scp (~> 1.1)

As can be seen, the error is caused by a gem version conflict. Inspecting the gem dependencies shows that BOSH's bosh_cli and bosh_deployer gems both depend on the fog and net-scp gems, but with incompatible version requirements, which produces the error.

Solution:

Uninstall the bosh_cli, bosh_deployer, fog, and net-scp gems, then reinstall them with Bundler:

gem uninstall bosh_cli
gem uninstall bosh_deployer
gem uninstall fog
gem uninstall net-scp
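
To see the conflicting constraints for yourself, a hedged check with standard RubyGems commands (output will vary with the versions actually installed):

# show the version constraints each gem declares
gem dependency bosh_cli
gem dependency bosh_deployer
# after uninstalling, confirm the conflicting gems are gone
gem list | grep -E 'bosh_cli|bosh_deployer|fog|net-scp'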

Create a new Gemfile in the root directory listing the gems to be installed:

source "https://RubyGems.org"
gem "bosh_cli"
gem "bosh_deployer"

After saving, run:

# Install Gems
bundle install
# Verify Bosh Micro
bundle exec bosh micro

With the conflict resolved, re-run the Micro BOSH deploy command.

Experience: sometimes the real cause is buried in a package's source, so the error message taken from the log is inaccurate and no solution can be found on Google. Reading the source in depth and modifying it to expose the real exception is what located and fixed the problem.
1.2 Endpoint version problem

bosh micro deploy /var/vcap/stemcells/micro-bosh-stemcell-openstack-kvm-0.8.1.tgz

Error message:

response => #<Excon::Response ... "Content-Type" => "application/json", "Content-Length" => "340", "Date" => "Wed, 22 Aug 2013 16:38:30 GMT"}, @status = 300>

The HTTP 300 (Multiple Choices) status suggests that the fog component does not support v2.0 of the Glance endpoint; it only supports v1.0 and v1.1.
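
You can confirm which API versions the Glance endpoint itself advertises; an unversioned request is expected to return the HTTP 300 version list seen above (the IP is from this environment, and python -m json.tool is used only for pretty-printing):

# query the bare Glance endpoint for its supported API versions
curl -s http://10.68.19.61:9292/ | python -m json.tool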

 

Workaround: the endpoint needs to be re-created with v1.0 URLs; execute the following command:

keystone endpoint-create \
    --region RegionOne \
    --service_id c933887a2e3341b18bdae2c92e6f1ba7 \
    --publicurl "http://10.68.19.61:9292/v1.0" \
    --adminurl "http://10.68.19.61:9292/v1.0" \
    --internalurl "http://10.68.19.61:9292/v1.0"

Adjust service_id and each URL for your environment. The original v2 endpoint must also be removed, otherwise a duplicate-definition error is reported.
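
A hedged sketch of removing the old v2 endpoint with the legacy keystone CLI (read the actual endpoint ID from the list output; the placeholder below is not a real ID):

# find the ID of the existing v2 glance endpoint
keystone endpoint-list
# delete it before creating the v1.0 endpoint
keystone endpoint-delete <endpoint-id>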

Experience: be skeptical of all technical documentation; what the official docs say is not necessarily right. When a problem appears, first check whether your own operation is correct; once that is confirmed, question the accuracy of the official document. While installing OpenStack, BOSH, and CloudFoundry, many problems occurred that the official documentation does not mention, but they do exist.

 

2. Problems encountered when deploying BOSH

2.1 Quota exceeded when deploying BOSH

When bosh deploy is executed to deploy BOSH, the following error message appears:

Creating bound missing VMs
  small/2: Expected([200, 202]) <=> Actual(413 Request Entity Too Large)
  request => {:connect_timeout => 60, :headers => {"Content-Type" => "application/json", "X-Auth-Token" => "38fe51b931184a30a287e71bc37cc05d", "Host" => "10.23.54.150:8774", "Content-Length" => 422}, :instrumentor_name => "excon", :mock => false, :nonblock => true, :read_timeout => 60, :retry_limit => 4, :ssl_ca_file => "/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/excon-0.16.2/data/cacert.pem", :ssl_verify_peer => true, :write_timeout => 60, :host => "10.23.54.150", :path => "/v2/69816bacecd749f9ba1d68b3c8bae1f1/servers.json", :port => "8774", :query => "ignore_awful_caching1362453746", :scheme => "http", :body => "{\"server\":{\"flavorRef\":\"25\",\"imageRef\":\"e205b9ec-0e19-4500-87fe-ede3af13b227\",\"name\":\"vm-b875d6d8-81ce-483b-bfa8-d6d525aaf280\",\"metadata\":{},\"user_data\":\"eyJyZWdpc3RyeSI6eyJlbmRwb2ludCI6Imh0dHA6Ly8xMC4yMy41MS4zNToy\nNTc3NyJ9LCJzZXJ2ZXIiOnsibmFtZSI6InZtLWI4NzVkNmQ4LTgxY2UtNDgz\nYi1iZmE4LWQ2ZDUyNWFhZjI4MCJ9LCJkbnMiOnsibmFtZXNlcnZlciI6WyIx\nMC4yMy41NC4xMDgiXX19\n\",\"key_name\":\"jae2\",\"security_groups\":[{\"name\":\"default\"}]}}", :expects => [200, 202], :method => "POST"}
  response => #<Excon::Response ... "Content-Length" => "121", "Content-Type" => "application/json; charset=UTF-8", "X-Compute-Request-Id" => "req-c5427ed2-62af-47b9-98a6-6f114893d8fc", "Date" => "Tue, 05 Mar 2013 03:22:27 GMT"}, @status = 413> (00:00:02)

This error is caused by an insufficient resource quota for the OpenStack project.

 

Workaround: modify the project's quota in OpenStack, raising the limits on CPUs, memory, disks, floating IPs and so on, and delete the original deployment with the following commands:

bosh deployments

bosh delete deployment <deployment name>

Before deploying again, roll the instance counts in the deployment yml file back to their earlier values, otherwise an error is raised saying a VM cannot be found. If problems remain after the rollback, check whether anything is cached and whether tables such as the instances table in the nova database still contain rows that were not deleted.
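
For reference, a hedged sketch of raising the project's compute quotas with the legacy nova CLI (the tenant ID and values are placeholders; pick numbers that fit your capacity):

# inspect the current quota, then raise instances, cores and RAM (MB)
nova quota-show --tenant <tenant-id>
nova quota-update --instances 20 --cores 40 --ram 51200 <tenant-id>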
2.2 RateLimit error when deploying BOSH

Error 100: Expected([200, 202]) <=> Actual(413 Request Entity Too Large)
  request => {:connect_timeout => 60, :headers => {"Content-Type" => "application/json", "X-Auth-Token" => "23bc718661d54252aba2d9c348c264e3", "Host" => "10.68.19.61:8774", "Content-Length" => 44}, :instrumentor_name => "excon", :mock => false, :nonblock => true, :read_timeout => 60, :retry_limit => 4, :ssl_ca_file => "/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/excon-0.16.2/data/cacert.pem", :ssl_verify_peer => true, :write_timeout => 60, :host => "10.68.19.61", :path => "/v2/8cf196acd0494fb0bc8d04e47ff77893/servers/046ac2d3-09b5-4abe-ab61-64b33d1348e1/action.json", :port => "8774", :query => "ignore_awful_caching1366685706", :scheme => "http", :body => "{\"addFloatingIp\":{\"address\":\"10.68.19.132\"}}", :expects => [200, 202], :method => "POST"}
  response => #<Excon::Response ... "Content-Length" => "161", "Content-Type" => "application/json; charset=UTF-8", "Date" => "Tue, 23 Apr 2013 02:55:06 GMT"}, @status = 413>

The OpenStack documentation describes these API rate limits.
Solution (modify the nova configuration on every compute node: /etc/nova/api-paste.ini):

 

1. Remove the Compute API rate limiting configuration

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1

Simply remove the ratelimit filter from each pipeline.

 

2. Alternatively, raise the Compute API rate limiting values: in the [filter:ratelimit] section, add a limits option and adjust the values, as follows:

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits = (POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)

After modifying, restart the nova-api service. Reference: the official OpenStack documentation: http://docs.openstack.org/essex/openstack-compute/admin/content/configuring-compute-API.html#d6e1844
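
A hedged restart sketch, assuming Ubuntu with Upstart-managed services as in this environment:

# restart the compute API so the new rate limits take effect
sudo service nova-api restart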
2.3 Director communication error when deploying BOSH

When bosh deploy reaches the task that updates the director, the deployment fails because of a communication error with the virtual machine. The error appears as follows:

Updating job director
director/0 (canary) (00:01:58)
Done 1/1 00:01:58

Error 400007: `director/0' is not running after update

Task 3 error

The bosh vms command output is shown below:

Deployment `bosh-openstack'

Director task 5

Task 5 done

+----------------------+---------+---------------+--------------------------+
| Job/index            | State   | Resource Pool | IPs                      |
+----------------------+---------+---------------+--------------------------+
| blobstore/0          | running | small         | 50.50.0.23               |
| director/0           | failing | small         | 50.50.0.20, 10.68.19.132 |
| health_monitor/0     | running | small         | 50.50.0.21               |
| nats/0               | running | small         | 50.50.0.19               |
| openstack_registry/0 | running | small         | 50.50.0.22, 10.68.19.133 |
| postgres/0           | running | small         | 50.50.0.17               |
| powerdns/0           | running | small         | 50.50.0.14, 10.68.19.131 |
| redis/0              | running | small         | 50.50.0.18               |
+----------------------+---------+---------------+--------------------------+

VMs total: 8

As shown, the director job is in the failing state. Connect to the director virtual machine and inspect its log:

/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:26:in `initialize': getaddrinfo: Name or service not known (SocketError)
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:26:in `new'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:26:in `block in connect'
from /var/vcap/data/packages/ruby/2.1/lib/ruby/1.9.1/timeout.rb:57:in `timeout'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:124:in `with_timeout'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:25:in `connect'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:204:in `establish_connection'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:23:in `connect'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:224:in `ensure_connected'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:114:in `block in process'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:191:in `logging'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:113:in `process'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:38:in `call'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis.rb:150:in `block in get'
from /var/vcap/data/packages/ruby/2.1/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis.rb:149:in `get'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:425:in `job'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:357:in `unregister_worker'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:145:in `ensure in work'
from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:145:in `work'
from /var/vcap/packages/director/bosh/director/bin/worker:77:in `<main>'

Solution (command line):

monit stop director
monit start director
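
To verify the result, a hedged check with monit itself (run on the director VM; monit summary lists each vcap process and its state):

# watch the director process come back to 'running'
monit summary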

These commands give no feedback. Wait a moment, then go back to the BOSH CLI machine and check the VM status again:

Deployment `bosh-openstack'
Director task 6

Task 6 done

+----------------------+---------+---------------+--------------------------+
| Job/index            | State   | Resource Pool | IPs                      |
+----------------------+---------+---------------+--------------------------+
| blobstore/0          | running | small         | 50.50.0.23               |
| director/0           | running | small         | 50.50.0.20, 10.68.19.132 |
| health_monitor/0     | running | small         | 50.50.0.21               |
| nats/0               | running | small         | 50.50.0.19               |
| openstack_registry/0 | running | small         | 50.50.0.22, 10.68.19.133 |
| postgres/0           | running | small         | 50.50.0.17               |
| powerdns/0           | running | small         | 50.50.0.14, 10.68.19.131 |
| redis/0              | running | small         | 50.50.0.18               |
+----------------------+---------+---------------+--------------------------+

VMs total: 8

As can be seen, the director job has started normally. Now re-deploy BOSH to bring everything back to normal. Note: do not delete the original deployment; just run bosh deploy directly from the command line.

root@bosh-cli:/var/vcap/deployments# bosh deploy
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully
Deploying `bosh.yml' to `microbosh-openstack' (type 'yes' to continue): yes

Director task 9

Preparing deployment
  binding deployment (00:00:00)
  binding releases (00:00:00)
  binding existing deployment (00:00:00)
  binding resource pools (00:00:00)
  binding stemcells (00:00:00)
  binding templates (00:00:00)
  binding properties (00:00:00)
  binding unallocated VMs (00:00:00)
  binding instance networks (00:00:00)
Done 9/9 00:00:00

Preparing package compilation
  finding packages to compile (00:00:00)
Done 1/1 00:00:00

Preparing DNS
  binding DNS (00:00:00)
Done 1/1 00:00:00

Preparing configuration
  binding configuration (00:00:01)
Done 1/1 00:00:01

Updating job blobstore
  blobstore/0 (canary) (00:02:31)
Done 1/1 00:02:31

Updating job openstack_registry
  openstack_registry/0 (canary) (00:01:44)
Done 1/1 00:01:44

Updating job health_monitor
  health_monitor/0 (canary) (00:01:43)
Done 1/1 00:01:43

Task 9 done
Started 2013-05-21 05:17:01 UTC
Finished 2013-05-21 05:23:00 UTC
Duration 00:05:59

Deployed `bosh.yml' to `microbosh-openstack'

3. Problems encountered when deploying CloudFoundry

3.1 Problems uploading the CloudFoundry release package

Execute the upload command:

bosh upload release /var/vcap/releases/cf-release/CF_Release_VF-131.1-dev.tgz

The error is as follows:

E, [2013-05-24T06:38:59.646076 #24082] [task:1] ERROR -- : Could not create object, 400/
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/blobstore_client-0.5.0/lib/blobstore_client/simple_blobstore_client.rb:30:in `create_file'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/blobstore_client-0.5.0/lib/blobstore_client/base.rb:30:in `create'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `block in create_blob'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `open'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `create_blob'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:373:in `create_package'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:291:in `block (2 levels) in create_packages'
/var/vcap/packages/director/bosh/director/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:289:in `block in create_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:286:in `each'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:286:in `create_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:272:in `process_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:131:in `process_release'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:48:in `block in perform'
/var/vcap/packages/director/bosh/director/lib/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/bosh/director/lib/director/lock.rb:58:in `lock'
/var/vcap/packages/director/bosh/director/lib/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:48:in `perform'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/bosh_common-0.5.4/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/bosh/director/lib/director/jobs/base_job.rb:8:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/job.rb:127:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:163:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:130:in `block in work'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:116:in `loop'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:116:in `work'
/var/vcap/packages/director/bosh/director/bin/worker:77:in `<main>'

Files cannot be created on the blobstore virtual machine managed by BOSH. A likely cause is insufficient disk space. SSH into the blobstore VM and check the size of the /var/vcap/store mount point:

root@1154e252-382e-4cf7-bb2d-09adbc97a954:~# df -h
Filesystem   Size  Used Avail Use% Mounted on
/dev/vda1    1.3G  956M  273M  78% /
none         241M  168K  241M   1% /dev
none         247M     0  247M   0% /dev/shm
none         247M   52K  247M   1% /var/run
none         247M     0  247M   0% /var/lock
none         247M     0  247M   0% /lib/init/rw
/dev/vdb2     20G  246M   18G   2% /var/vcap/data
/dev/loop0   124M  5.6M  118M   5% /tmp

No /var/vcap/store mount point exists, so its files land on the root filesystem /dev/vda1, which is only 1.3G. The df output above was taken after deleting content left over from a previous upload that had never been removed; before that cleanup /dev/vda1 was already at 100%.
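
To confirm what is filling the root filesystem, a hedged check (the -x flag keeps du on the root filesystem only):

# list the largest directories on the root filesystem
du -xh / 2>/dev/null | sort -h | tail -n 20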

 

Solution:

1. Create a 20G disk /dev/vda2 (size it according to your actual situation) and mount it at /var/vcap/store. Copy the existing files out before mounting, then copy them back in afterwards (see the sketch after the df output below). The result:

root@1154e252-382e-4cf7-bb2d-09adbc97a954:~# df -h
Filesystem   Size  Used Avail Use% Mounted on
/dev/vda1    1.3G  956M  273M  78% /
none         241M  168K  241M   1% /dev
none         247M     0  247M   0% /dev/shm
none         247M   52K  247M   1% /var/run
none         247M     0  247M   0% /var/lock
none         247M     0  247M   0% /lib/init/rw
/dev/vdb2     20G  246M   18G   2% /var/vcap/data
/dev/loop0   124M  5.6M  118M   5% /tmp
/dev/vda2     19G  1.8G   16G  11% /var/vcap/store
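
A hedged sketch of the mount procedure itself (the device name and ext4 filesystem type are assumptions; the copy steps preserve the files that were written under /var/vcap/store while it still lived on the root filesystem):

# save the existing contents, which currently sit on /dev/vda1
mkdir /tmp/store-backup
cp -a /var/vcap/store/. /tmp/store-backup/
# format the new partition and mount it over /var/vcap/store
mkfs.ext4 /dev/vda2
mount /dev/vda2 /var/vcap/store
# restore the saved files onto the new mount
cp -a /tmp/store-backup/. /var/vcap/store/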

2. Delete the release:

root@bosh-cli:/var/vcap/deployments# bosh releases

+---------------+-----------+
| Name          | Versions  |
+---------------+-----------+
| CF_Release_VF | 131.1-dev |
+---------------+-----------+

Releases total: 1
root@bosh-cli:/var/vcap/deployments# bosh delete release CF_Release_VF

3. Run the release upload command again.

3.2 Error uploading the cf-services release package

The error message is as follows:

root@bosh-cli:~/src/cloudfoundry/cf-services-release# bosh upload release
Upload release `CF-VF-Service-Release-0.1-dev.yml' to `bosh' (type 'yes' to continue): yes

Copying packages
----------------
ruby (0.1-dev) SKIP
libyaml (0.1-dev) SKIP
mysql (0.1-dev) FOUND LOCAL
ruby_next (0.1-dev) SKIP
postgresql_node (0.1-dev) FOUND LOCAL
mysqlclient (0.1-dev) SKIP
syslog_aggregator (0.1-dev) FOUND LOCAL
mysql_gateway (0.1-dev) FOUND LOCAL
postgresql92 (0.1-dev) FOUND LOCAL
mysql55 (0.1-dev) FOUND LOCAL
postgresql_gateway (0.1-dev) FOUND LOCAL
postgresql91 (0.1-dev) FOUND LOCAL
mysql_node (0.1-dev) FOUND LOCAL
postgresql (0.1-dev) FOUND LOCAL
sqlite (0.1-dev) SKIP
common (0.1-dev) SKIP

....

Release info
------------
Name: CF-VF-Service-Release
Version: 0.1-dev

Packages
  - ruby (0.1-dev)
  - libyaml (0.1-dev)
  - mysql (0.1-dev)
  - ruby_next (0.1-dev)
  - postgresql_node (0.1-dev)
  - mysqlclient (0.1-dev)
  - syslog_aggregator (0.1-dev)
  - mysql_gateway (0.1-dev)
  - postgresql92 (0.1-dev)
  - mysql55 (0.1-dev)
  - postgresql_gateway (0.1-dev)
  - postgresql91 (0.1-dev)
  - mysql_node (0.1-dev)
  - postgresql (0.1-dev)
  - sqlite (0.1-dev)
  - common (0.1-dev)

Jobs
  - mysql_node_external (0.1-dev)
  - postgresql_node (0.1-dev)
  - mysql_gateway (0.1-dev)
  - postgresql_gateway (0.1-dev)
  - mysql_node (0.1-dev)
  - rds_mysql_gateway (0.1-dev)

....

Director task 20

Extracting release
  extracting release (00:00:08)
Done 1/1 00:00:08

Verifying manifest
  verifying manifest (00:00:00)
Done 1/1 00:00:00

Resolving package dependencies
  resolving package dependencies (00:00:00)
Done 1/1 00:00:00

Creating new packages
  ruby/0.1-dev: Could not fetch object, 404/ (00:00:01)
Error 1/16 00:00:01

Error 100: Could not fetch object, 404/

Task 20 error

E, [2013-07-22T02:45:32.218762 #16460] [task:20] ERROR -- : Could not fetch object, 404/
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.5.0.pre.3/lib/blobstore_client/dav_blobstore_client.rb:48:in `get_file'
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.5.0.pre.3/lib/blobstore_client/base.rb:50:in `get'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:16:in `block (2 levels) in copy_blob'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:15:in `open'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:15:in `block in copy_blob'
/var/vcap/packages/ruby/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:13:in `copy_blob'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:352:in `create_package'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:283:in `block (2 levels) in create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:281:in `block in create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:278:in `each'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:278:in `create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:264:in `process_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:134:in `process_release'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:47:in `block in perform'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock.rb:58:in `lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:47:in `perform'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/base_job.rb:8:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/job.rb:125:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:186:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:149:in `block in work'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `loop'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `work'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/bin/worker:74:in `<top (required)>'
/var/vcap/packages/director/bin/worker:23:in `load'
/var/vcap/packages/director/bin/worker:23:in `<main>'

Preliminary analysis: the ruby package was marked SKIP because, before the upload, bosh detected that a package of the same version already existed in the system, so uploading it was skipped. But why could the package not be found when it was loaded later? After much agonizing the cause still could not be determined. On re-analysis: uploaded packages are stored on the blobstore virtual machine, while the upload metadata is stored elsewhere. Checking the blobstore, its state was running and normal. Then it came to mind that the blobstore VM had been rebooted after an earlier network failure; inspection revealed that the /var/vcap/store mount of /dev/vda2 had disappeared. With the problem identified, resolve it as follows:

 

1. Remount /dev/vda2 at /var/vcap/store.

2. Configure the blobstore virtual machine to mount /dev/vda2 at boot.
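
A hedged sketch of both steps, assuming an ext4 filesystem on /dev/vda2 (the fstab entry makes the mount survive reboots):

# remount now, then make the mount persistent across boots
mount /dev/vda2 /var/vcap/store
echo '/dev/vda2 /var/vcap/store ext4 defaults 0 2' >> /etc/fstab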

Run the upload command again; everything is normal.

3.3 Errors while deploying CloudFoundry services

When deploying the services, a large number of virtual machines and Volumes must be created, so errors can be triggered by the OpenStack environment's own quota limits. The error below was caused by Cinder's default quota of 10 volumes: it occurred as soon as more than 10 volumes were created.

The error message on the BOSH side:

E, [2013-08-22T08:45:38.099667 #8647] [task:323] ERROR -- : OpenStack API Request Entity Too Large error. Check task debug log for details.
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:20:in `cloud_error'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:39:in `rescue in with_openstack'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:25:in `with_openstack'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/cloud.rb:361:in `block in create_disk'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/cloud.rb:342:in `create_disk'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:377:in `block in update_persistent_disk'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:338:in `_transaction'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:300:in `block in transaction'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:293:in `transaction'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:376:in `update_persistent_disk'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:73:in `block in update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:39:in `step'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:73:in `update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:63:in `block (5 levels) in update'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:60:in `block (4 levels) in update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:59:in `block (3 levels) in update'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:83:in `call'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:83:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:67:in `loop'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:67:in `block in create_thread'

The error message on the OpenStack side:

2013-08-22 16:44:57 ERROR nova.api.openstack [req-75ffcfe3-34ea-4e8d-ab5f-685a890d4378 57be829ed997455f9600a4f46f7dbbef 8cf196acd0494fb0bc8d04e47ff77893] Caught error: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-30e6e7d6-313d-46b1-9522-b3b20dd3e2ab)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack Traceback (most recent call last):
2013-08-22 16:44:57 24345 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 78, in __call__
2013-08-22 16:44:57 24345 TRACE nova.api.openstack return req.get_response(self.application)
......
2013-08-22 16:44:57 24345 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 109, in request
2013-08-22 16:44:57 24345 TRACE nova.api.openstack raise exceptions.from_response(resp, body)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack OverLimit: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-30e6e7d6-313d-46b1-9522-b3b20dd3e2ab)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack

Workaround: modify Cinder's quota and restart the cinder services:

cinder quota-update --volumes 20

cd /etc/init.d/; for i in $(ls cinder-*); do sudo service $i restart; done
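
Note, a hedged variant: on many cinderclient versions quota-update also requires the tenant ID as a positional argument (the lookup command assumes the legacy keystone CLI):

# look up the project/tenant ID, then raise its volume quota
keystone tenant-list
cinder quota-update --volumes 20 <tenant-id>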
     
         
         
         