While recently doing performance tuning on an OpenStack Cinder driver, I found myself adding a decorator to the driver to collect the execution time of every driver interface.
In fact, OpenStack already has an incubated project for exactly this, called osprofiler. It can be integrated with OpenStack Ceilometer, letting you collect performance statistics easily and saving a substantial amount of tuning time.
How osprofiler works:
osprofiler traces calls across the different OpenStack components, recording the start and end times of every WSGI call, RPC call, and driver interface, and then sends the recorded data as RPC messages to the Ceilometer database for storage.
After performing an OpenStack operation, the user can then use the osprofiler CLI to render the call order and the execution time of each interface in HTML or JSON format, making it easy to find the bottleneck in a call stack.
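To make the recorded data concrete, each trace point is a small notification roughly of the following shape (a sketch only; the exact field names may differ between osprofiler releases):
# Rough shape of one osprofiler trace-point notification (illustrative;
# verify field names against the osprofiler release you are running):
trace_point = {
    "name": "wsgi-start",                # point name plus -start/-stop
    "base_id": "<id shared by all points of one request>",
    "parent_id": "<id of the enclosing trace point>",
    "trace_id": "<id of this trace point>",
    "timestamp": "2015-04-04T14:58:51.000000",
    "info": {"any_key": "with_any_value"},
}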
More about osprofiler can be found at https://github.com/stackforge/osprofiler
[NOTE]: Some readers have reported, and my own experiments confirm, that the latest master branch cannot produce osprofiler data correctly. The error is as follows:
[To be added]
The solution is to use the kilo version:
cd ~/devstack
# Save the current changes
git stash
git checkout stable/kilo
# Reapply the changes
git stash pop
# Other configuration stays unchanged
# Then re-run ./stack.sh
./stack.sh
# Upgrade python-cinderclient and install python-ceilometerclient
sudo pip install python-cinderclient --upgrade
sudo pip install python-ceilometerclient
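A quick sanity check: the cinder client only exposes the --profile option when osprofiler is importable, so grepping the help output should show it:
cinder help | grep -i profile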
Basic usage:
import json

from osprofiler import notifier
from osprofiler import profiler

# Before use, be sure to call init(), otherwise nothing will be recorded
profiler.init("SECRET_HMAC_KEY", base_id='sadfsdafasdfasdfas',
              parent_id='dsafafasdfsadf')

def some_func():
    profiler.start("point_name", {"any_key": "with_any_value"})
    # Your code
    print "I am inside some_func"
    profiler.stop({"any_info_about_point": "in_this_dict"})

@profiler.trace("point_name",
                info={"any_info_about_point": "in_this_dict"},
                hide_args=False)
def some_func2(*args, **kwargs):
    # If you need to hide args in the profile info, pass hide_args=True
    print "Hello, osprofiler"

def some_func3():
    with profiler.Trace("point_name",
                        info={"any_key": "with_any_value"}):
        # Some code here
        pass

@profiler.trace_cls("point_name", info={}, hide_args=False,
                    trace_private=False)
class TracedClass(object):
    def traced_method(self):
        print "Trace me"

    def _traced_only_if_trace_private_true(self):
        pass

# Write all records to a JSON file
def send_info_to_file_collector(info, context=None):
    with open("traces", "a") as f:
        f.write(json.dumps(info))

notifier.set(send_info_to_file_collector)

# All of the following calls will be recorded
some_func()
some_func2(test='asdfasdf', adf=313)
trace = TracedClass()
trace.traced_method()
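One caveat with this simple file collector: json.dumps() writes the records back to back, so the traces file is not a single valid JSON document. A minimal reading sketch, assuming you change the collector to append a newline so each record sits on its own line:
import json

# Assumes the collector was changed to write one JSON record per line:
#     f.write(json.dumps(info) + "\n")
with open("traces") as f:
    for line in f:
        record = json.loads(line)
        # Field names are illustrative; inspect your own records
        print record.get("name"), record.get("info")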
You will then find many records in the traces file in the current directory, but the readability of this raw data is rather poor. So how does OpenStack solve this?
The answer is to use osprofiler together with Ceilometer. To understand Ceilometer, its architecture reference is helpful: http://docs.openstack.org/developer/ceilometer/architecture.html#high-level-architecture
The following uses Cinder's LVM driver as an example of how to configure Cinder, osprofiler, and Ceilometer to work together.
First, set up devstack. The following is the devstack local.conf configuration (if this is your first time using devstack, see http://www.linuxidc.com/Linux/2016-01/127509.htm):
(Note that I enabled Ceilometer, Neutron, and all their components. When following this document, HOST_IP and SERVICE_HOST should be set to the IP of your own machine.)
[[local|localrc]]
HOST_IP=192.168.14.128
SERVICE_HOST=192.168.14.128
ADMIN_PASSWORD=welcome
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
OFFLINE=False
RECLONE=False
LOG_COLOR=False
disable_service horizon
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# Enable the ceilometer metering services
enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector
# Enable the ceilometer alarming services
enable_service ceilometer-alarm-evaluator ceilometer-alarm-notifier
# Enable the ceilometer api services
enable_service ceilometer-api
# This must be added; it is the key to getting Cinder performance data recorded in Ceilometer
CEILOMETER_NOTIFICATION_TOPICS=notifications,profiler
disable_service n-net
disable_service tempest
disable_service h-eng h-api h-api-cfn h-api-cw
PHYSICAL_NETWORK=physnet1
FIXED_RANGE=192.168.106.0/24
FIXED_NETWORK_SIZE=32
NETWORK_GATEWAY=192.168.106.1

[[post-config|$CINDER_CONF]]
[profiler]
profiler_enabled = True
trace_sqlalchemy = False

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:110

[ovs]
bridge_mappings = physnet1:br-eth1
enable_tunneling = False
Then run ./stack.sh.
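To confirm that the post-config stanza was applied, the rendered cinder.conf (/etc/cinder/cinder.conf under devstack) should contain the [profiler] section:
grep -A 2 '\[profiler\]' /etc/cinder/cinder.conf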
Run a Cinder operation to generate performance data:
peter@Ubuntu:~/devstack$ cinder --profile SECRET_KEY create --name peter 1
+----------------------------------------+--------------------------------------+
| Property                               | Value                                |
+----------------------------------------+--------------------------------------+
| attachments                            | []                                   |
| availability_zone                      | nova                                 |
| bootable                               | false                                |
| consistencygroup_id                    | None                                 |
| created_at                             | 2015-04-04T14:58:51.000000           |
| description                            | None                                 |
| encrypted                              | False                                |
| id                                     | 28857983-3240-445d-a60b-3b91295c31e8 |
| metadata                               | {}                                   |
| multiattach                            | False                                |
| name                                   | peter                                |
| os-vol-host-attr:host                  | None                                 |
| os-vol-mig-status-attr:migstat         | None                                 |
| os-vol-mig-status-attr:name_id         | None                                 |
| os-vol-tenant-attr:tenant_id           | ade7584debc54964b4fef737e56e062d     |
| os-volume-replication:driver_data      | None                                 |
| os-volume-replication:extended_status  | None                                 |
| replication_status                     | disabled                             |
| size                                   | 1                                    |
| snapshot_id                            | None                                 |
| source_volid                           | None                                 |
| status                                 | creating                             |
| user_id                                | 56aac792735046dea02e12e85e0d1a03     |
| volume_type                            | lvmdriver-1                          |
+----------------------------------------+--------------------------------------+
Trace ID: aa4903cc-fd0c-42ef-96f1-bd1c5a1740f1
To display trace use next command:
osprofiler trace show --html aa4903cc-fd0c-42ef-96f1-bd1c5a1740f1
Export the performance data to HTML format:
osprofiler trace show --html aa4903cc-fd0c-42ef-96f1-bd1c5a1740f1 --out test.html
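Besides HTML, the osprofiler CLI can also emit the raw trace as JSON, which is handy for scripted analysis (source your OpenStack credentials first, e.g. devstack's openrc, since the trace is read back from Ceilometer):
osprofiler trace show --json aa4903cc-fd0c-42ef-96f1-bd1c5a1740f1 --out test.json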
The HTML looks like the following (a proxy may be required, since the page needs to access google.com ^_^)
Viewing each interface's execution time: the figure shows, for each interface, execution-time statistics across the different services.
Viewing interface parameters: you can click on a function to see its specific parameters.
With the views above, you can easily collect execution-time statistics for every OpenStack interface call and see visually where the performance bottleneck of a given operation lies.
Per-interface performance statistics inside the Cinder driver:
The work above leaves one problem unresolved: if your driver internally spans multiple levels of classes (for example driver.create_volume -> AnotherClass.def1 -> AnotherClass.def2 -> AnotherClass.defn), you only learn the execution time of the entry function driver.create_volume, not how long each interface inside AnotherClass takes (how much time def1 takes, how much def2 takes, and so on). For that, you need to slightly modify the LVM driver code and add the following decorator to every class the LVM driver uses:
@profiler.trace_cls("AnotherClass", info={}, hide_args=False,
                    trace_private=True)
class AnotherClass(object):
    def def1(self):
        pass

    def def2(self):
        pass
After changing the code this way, restart the Cinder services and you will get much more detailed data.
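As a concrete illustration: in the kilo tree the low-level LVM operations the driver calls live in a helper class under cinder/brick/local_dev/lvm.py; the sketch below traces such a helper (the class body is simplified here, and the module path is an assumption to verify against your branch).
from osprofiler import profiler

@profiler.trace_cls("lvm", info={}, hide_args=False,
                    trace_private=True)  # also trace _private helpers
class LVM(object):
    """Simplified stand-in for cinder's brick LVM helper class."""

    def create_volume(self, name, size_str):
        pass  # each call now appears as its own span in the trace

    def _execute_lvm_command(self, *cmd):
        pass  # traced as well, because trace_private=True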