Solution (command line):
monit stop director
monit start director
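These monit commands print nothing on success. To confirm the director process has actually come back up before leaving the VM, monit's own status command lists the state of every monit-managed process:
monit summary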
Wait a moment, then go back to the BOSH CLI machine and check the VM status; it should look as follows:
Deployment `bosh-openstack'
Director task 6
Task 6 done
+----------------------+---------+---------------+--------------------------+
| Job/index            | State   | Resource Pool | IPs                      |
+----------------------+---------+---------------+--------------------------+
| blobstore/0          | running | small         | 50.50.0.23               |
| director/0           | running | small         | 50.50.0.20, 10.68.19.132 |
| health_monitor/0     | running | small         | 50.50.0.21               |
| nats/0               | running | small         | 50.50.0.19               |
| openstack_registry/0 | running | small         | 50.50.0.22, 10.68.19.133 |
| postgres/0           | running | small         | 50.50.0.17               |
| powerdns/0           | running | small         | 50.50.0.14, 10.68.19.131 |
| redis/0              | running | small         | 50.50.0.18               |
+----------------------+---------+---------------+--------------------------+
VMs total: 8
As you can see, the director job is running normally again. Next, re-deploy BOSH. Note: do not delete the original deployment; simply run bosh deploy directly from the command line:
root@bosh-cli:/var/vcap/deployments# bosh deploy
Getting deployment properties from director ...
Compiling deployment manifest ...
Please review all changes carefully
Deploying `bosh.yml' to `microbosh-openstack' (type 'yes' to continue): yes
Director task 9
Preparing deployment
binding deployment (00:00:00)
binding releases (00:00:00)
binding existing deployment (00:00:00)
binding resource pools (00:00:00)
binding stemcells (00:00:00)
binding templates (00:00:00)
binding properties (00:00:00)
binding unallocated VMs (00:00:00)
binding instance networks (00:00:00)
Done 9/9 00:00:00
Preparing package compilation
finding packages to compile (00:00:00)
Done 1/1 00:00:00
Preparing DNS
binding DNS (00:00:00)
Done 1/1 00:00:00
Preparing configuration
binding configuration (00:00:01)
Done 1/1 00:00:01
Updating job blobstore
blobstore/0 (canary) (00:02:31)
Done 1/1 00:02:31
Updating job openstack_registry
openstack_registry/0 (canary) (00:01:44)
Done 1/1 00:01:44
Updating job health_monitor
health_monitor/0 (canary) (00:01:43)
Done 1/1 00:01:43
Task 9 done
Started 2013-05-21 05:17:01 UTC
Finished 2013-05-21 05:23:00 UTC
Duration 00:05:59
Deployed `bosh.yml' to `microbosh-openstack'
3. Issues encountered while deploying CloudFoundry
3.1 Problems uploading the CloudFoundry release package
The upload command that was executed:
bosh upload release /var/vcap/releases/cf-release/CF_Release_VF-131.1-dev.tgz
It fails with the following error:
E, [2013-05-24T06:38:59.646076 #24082] [task:1] ERROR -- : Could not create object, 400/
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/blobstore_client-0.5.0/lib/blobstore_client/simple_blobstore_client.rb:30:in `create_file'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/blobstore_client-0.5.0/lib/blobstore_client/base.rb:30:in `create'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `block in create_blob'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `open'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `create_blob'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:373:in `create_package'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:291:in `block (2 levels) in create_packages'
/var/vcap/packages/director/bosh/director/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:289:in `block in create_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:286:in `each'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:286:in `create_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:272:in `process_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:131:in `process_release'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:48:in `block in perform'
/var/vcap/packages/director/bosh/director/lib/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/bosh/director/lib/director/lock.rb:58:in `lock'
/var/vcap/packages/director/bosh/director/lib/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:48:in `perform'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/bosh_common-0.5.4/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/bosh/director/lib/director/jobs/base_job.rb:8:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/job.rb:127:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:163:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:130:in `block in work'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:116:in `loop'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:116:in `work'
/var/vcap/packages/director/bosh/director/bin/worker:77:in `<main>'
So BOSH cannot create files on the blobstore VM it manages. A likely cause is insufficient disk space, so SSH into the blobstore VM and check the size of the /var/vcap/store mount point:
root@1154e252-382e-4cf7-bb2d-09adbc97a954:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       1.3G  956M  273M  78% /
none            241M  168K  241M   1% /dev
none            247M     0  247M   0% /dev/shm
none            247M   52K  247M   1% /var/run
none            247M     0  247M   0% /var/lock
none            247M     0  247M   0% /lib/init/rw
/dev/vdb2        20G  246M   18G   2% /var/vcap/data
/dev/loop0      124M  5.6M  118M   5% /tmp
Note that there is no separate mount at /var/vcap/store; it sits on the root filesystem /dev/vda1, which is only 1.3G. (The output above was taken after the previously uploaded content had been deleted; before that cleanup, /dev/vda1 was already at 100%.)
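To see what is consuming space on the root filesystem, a generic check (not from the original session) is:
du -xh --max-depth=1 / | sort -h    # biggest directories last; -x stays on one filesystem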
Solution:
1. Create a 20G disk /dev/vda2 (size it according to your actual needs) and mount it at /var/vcap/store: copy the existing files out before mounting, then copy them back in afterwards (a command sketch follows the df output below). The result:
root@1154e252-382e-4cf7-bb2d-09adbc97a954:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       1.3G  956M  273M  78% /
none            241M  168K  241M   1% /dev
none            247M     0  247M   0% /dev/shm
none            247M   52K  247M   1% /var/run
none            247M     0  247M   0% /var/lock
none            247M     0  247M   0% /lib/init/rw
/dev/vdb2        20G  246M   18G   2% /var/vcap/data
/dev/loop0      124M  5.6M  118M   5% /tmp
/dev/vda2        19G  1.8G   16G  11% /var/vcap/store
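For reference, a minimal sketch of step 1, assuming the new volume appears as /dev/vda2 and ext4 is acceptable (adapt the device name and filesystem to your environment):
mkfs.ext4 /dev/vda2                          # format the new volume
mkdir -p /tmp/store-backup
cp -a /var/vcap/store/. /tmp/store-backup/   # copy the existing contents out first
mount /dev/vda2 /var/vcap/store              # mount the new volume at the store path
cp -a /tmp/store-backup/. /var/vcap/store/   # copy the contents back onto it
df -h /var/vcap/store                        # verify the new mount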
2. Delete the release:
root@bosh-cli:/var/vcap/deployments# bosh releases
+---------------+-----------+
| Name          | Versions  |
+---------------+-----------+
| CF_Release_VF | 131.1-dev |
+---------------+-----------+
Releases total: 1
root@bosh-cli:/var/vcap/deployments# bosh delete release CF_Release_VF
3. Run the release upload command again, as shown below.
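That is, the same command that failed originally:
bosh upload release /var/vcap/releases/cf-release/CF_Release_VF-131.1-dev.tgz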
3.2 An error uploading the cf-services release package
The error message is as follows:
root@bosh-cli:~/src/cloudfoundry/cf-services-release# bosh upload release
Upload release `CF-VF-Service-Release-0.1-dev.yml' to `bosh' (type 'yes' to continue): yes
Copying packages
----------------
ruby (0.1-dev) SKIP
libyaml (0.1-dev) SKIP
mysql (0.1-dev) FOUND LOCAL
ruby_next (0.1-dev) SKIP
postgresql_node (0.1-dev) FOUND LOCAL
mysqlclient (0.1-dev) SKIP
syslog_aggregator (0.1-dev) FOUND LOCAL
mysql_gateway (0.1-dev) FOUND LOCAL
postgresql92 (0.1-dev) FOUND LOCAL
mysql55 (0.1-dev) FOUND LOCAL
postgresql_gateway (0.1-dev) FOUND LOCAL
postgresql91 (0.1-dev) FOUND LOCAL
mysql_node (0.1-dev) FOUND LOCAL
postgresql (0.1-dev) FOUND LOCAL
sqlite (0.1-dev) SKIP
common (0.1-dev) SKIP
....
Release info
------------
Name: CF-VF-Service-Release
Version: 0.1-dev
Packages
- ruby (0.1-dev)
- libyaml (0.1-dev)
- mysql (0.1-dev)
- ruby_next (0.1-dev)
- postgresql_node (0.1-dev)
- mysqlclient (0.1-dev)
- syslog_aggregator (0.1-dev)
- mysql_gateway (0.1-dev)
- postgresql92 (0.1-dev)
- mysql55 (0.1-dev)
- postgresql_gateway (0.1-dev)
- postgresql91 (0.1-dev)
- mysql_node (0.1-dev)
- postgresql (0.1-dev)
- sqlite (0.1-dev)
- common (0.1-dev)
Jobs
- mysql_node_external (0.1-dev)
- postgresql_node (0.1-dev)
- mysql_gateway (0.1-dev)
- postgresql_gateway (0.1-dev)
- mysql_node (0.1-dev)
- rds_mysql_gateway (0.1-dev)
....
Director task 20
Extracting release
extracting release (00:00:08)
Done 1/1 00:00:08
Verifying manifest
verifying manifest (00:00:00)
Done 1/1 00:00:00
Resolving package dependencies
resolving package dependencies (00:00:00)
Done 1/1 00:00:00
Creating new packages
ruby/0.1-dev: Could not fetch object, 404/ (00:00:01)
Error 1/16 00:00:01
Error 100: Could not fetch object, 404/
Task 20 error
E, [2013-07-22T02:45:32.218762 #16460] [task:20] ERROR -- : Could not fetch object, 404/
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.5.0.pre.3/lib/blobstore_client/dav_blobstore_client.rb:48:in `get_file'
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.5.0.pre.3/lib/blobstore_client/base.rb:50:in `get'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:16:in `block (2 levels) in copy_blob'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:15:in `open'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:15:in `block in copy_blob'
/var/vcap/packages/ruby/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:13:in `copy_blob'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:352:in `create_package'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:283:in `block (2 levels) in create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:281:in `block in create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:278:in `each'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:278:in `create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:264:in `process_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:134:in `process_release'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:47:in `block in perform'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock.rb:58:in `lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:47:in `perform'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/base_job.rb:8:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/job.rb:125:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:186:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:149:in `block in work'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `loop'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `work'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/bin/worker:74:in `<top (required)>'
/var/vcap/packages/director/bin/worker:23:in `load'
/var/vcap/packages/director/bin/worker:23:in `<main>'
Preliminary analysis: the ruby package shows SKIP because BOSH detected that a package of the same version had already been uploaded, so it skipped uploading it again. But then why can't that package be found when it is loaded later? After much head-scratching, the cause still could not be determined. On re-analysis: uploaded packages are stored on the blobstore VM, while the upload metadata is stored elsewhere, so the blobstore state was checked; it was running and looked normal. Then we remembered that the blobstore VM had been rebooted earlier because of a network failure, and inspection revealed that the mount of /dev/vda2 at /var/vcap/store had disappeared. Problem identified; it was resolved as follows:
1. Remount /dev/vda2 at /var/vcap/store.
2. Configure the blobstore VM to mount /dev/vda2 automatically at boot (a sample /etc/fstab entry is sketched below).
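A minimal sketch of step 2, assuming the standard /etc/fstab mechanism and an ext4 filesystem (match whatever filesystem the volume actually uses):
# append to /etc/fstab on the blobstore VM so the mount survives reboots
/dev/vda2  /var/vcap/store  ext4  defaults  0  2
Running mount -a afterwards is a quick way to confirm the entry parses and mounts cleanly.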
Run the upload command again; everything works normally.
3.3 An error occurred while deploying the CloudFoundry services
When deploying the services, a large number of VMs and, with them, a large number of volumes must be created, so quota restrictions in the OpenStack environment itself can cause errors. The error below occurred because Cinder's default volume quota is 10; the error is raised as soon as more than 10 volumes are created.
The error message on the BOSH side:
E, [2013-08-22T08:45:38.099667 #8647] [task:323] ERROR -- : OpenStack API Request Entity Too Large error. Check task debug log for details.
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:20:in `cloud_error'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:39:in `rescue in with_openstack'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:25:in `with_openstack'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/cloud.rb:361:in `block in create_disk'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/cloud.rb:342:in `create_disk'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:377:in `block in update_persistent_disk'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:338:in `_transaction'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:300:in `block in transaction'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:293:in `transaction'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:376:in `update_persistent_disk'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:73:in `block in update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:39:in `step'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:73:in `update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:63:in `block (5 levels) in update'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:60:in `block (4 levels) in update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:59:in `block (3 levels) in update'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:83:in `call'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:83:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:67:in `loop'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:67:in `block in create_thread'
The error message on the OpenStack side:
2013-08-22 16:44:57 ERROR nova.api.openstack [req-75ffcfe3-34ea-4e8d-ab5f-685a890d4378 57be829ed997455f9600a4f46f7dbbef 8cf196acd0494fb0bc8d04e47ff77893] Caught error: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-30e6e7d6-313d-46b1-9522-b3b20dd3e2ab)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack Traceback (most recent call last):
2013-08-22 16:44:57 24345 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 78, in __call__
2013-08-22 16:44:57 24345 TRACE nova.api.openstack     return req.get_response(self.application)
......
2013-08-22 16:44:57 24345 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 109, in request
2013-08-22 16:44:57 24345 TRACE nova.api.openstack     raise exceptions.from_response(resp, body)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack OverLimit: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-30e6e7d6-313d-46b1-9522-b3b20dd3e2ab)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack
Workaround: increase Cinder's volume quota and restart the Cinder services (the limit and tenant ID below are placeholders):
cinder quota-update --volumes <new_limit> <tenant_id>
cd /etc/init.d/; for i in $(ls cinder-*); do sudo service $i restart; done
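To verify that the new limit took effect (the tenant ID is again a placeholder):
cinder quota-show <tenant_id>    # lists the current volume, gigabyte, and snapshot quotas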