Ceilometer configuration¶
Note
This page only applies for Fleio OpenStack Edition or Fleio Full Edition.
Ceilometer is a required OpenStack service for Fleio. It gathers data and metrics about services and sends them to Gnocchi, which Fleio uses for billing.
Gnocchi archive policy¶
Fleio expects a specific Gnocchi archive policy. Create a new archive policy named fleio_policy and set it as the default (fleio_policy is just an example; you can use any name you want, just make sure you update the config files as needed):
openstack metric archive-policy create -d granularity:5m,points:290 -d granularity:30m,points:336 -d granularity:1h,points:768 -m max -m mean -m sum fleio_policy
openstack metric archive-policy-rule create -a fleio_policy -m '*' default
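As a quick sanity check, you can list the policy and rule back to confirm they were created:
openstack metric archive-policy show fleio_policy
openstack metric archive-policy-rule list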
Ceilometer configuration files¶
Please check ceilometer.conf, pipeline.yaml, polling.yaml and gnocchi_resources.yaml to make sure the settings are similar to what's shown below.
ceilometer.conf¶
Under the [DEFAULT] section, make sure the event and meter dispatchers are set to gnocchi:
Note
This may not be needed, depending on your OpenStack distribution. In the newer versions of OpenStack these settings are not available anymore.
[DEFAULT]
...
event_dispatchers = gnocchi
meter_dispatchers = gnocchi
...
# the following settings are needed for the Ceph object store
[service_types]
radosgw = object-store
[rgw_admin_credentials]
access_key = <key>
secret_key = <key>
The Ceph access_key and secret_key may be obtained using the following radosgw-admin commands:
radosgw-admin user create --uid admin --display-name "admin user" --caps "usage=read,write;metadata=read,write;users=read,write;buckets=read,write"
radosgw-admin user info --uid admin | jq '.keys'
Note
This may not be needed, depending on your OpenStack distribution. In the newer versions of OpenStack these settings are not available anymore.
Under the [api] section, also make sure gnocchi is enabled:
...
[api]
gnocchi_is_enabled = True
...
The [notification] settings should be similar to:
[notification]
# Notification queues to listen on
# Ceilometer needs to connect to its own notification RabbitMQ vhost and to the RabbitMQ vhosts that exist for all other services
# Do not confuse a RabbitMQ vhost with a normal virtual host or server. RabbitMQ can have its own vhosts, like /ceilometer, /swift, /nova and /cinder, as shown below.
# If only one vhost exists, there is no need to add multiple messaging_urls
messaging_urls = rabbit://ceilometer:password@rabbitip:5672//ceilometer
messaging_urls = rabbit://swift:password@rabbitip:5672//swift
messaging_urls = rabbit://nova:password@rabbitip:5672//nova
messaging_urls = rabbit://cinder:password@rabbitip:5672//cinder
messaging_urls = rabbit://neutron:password@rabbitip:5672//neutron
store_events = False
workers = 3 # adjust according to your load
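To see which RabbitMQ vhosts exist on your broker (and therefore how many messaging_urls entries you need), you can list them on the RabbitMQ host, for example:
rabbitmqctl list_vhosts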
event_pipeline.yaml¶
Note
This may not be needed, depending on your OpenStack distribution. This was tested, and it is required for OpenStack-Ansible deployments.
We chose to leave the file empty since it was flooding RabbitMQ with unnecessary notifications.
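A simple way to empty the file in place (the path below assumes the default Ceilometer configuration location; adjust it for your deployment):
truncate -s 0 /etc/ceilometer/event_pipeline.yaml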
event_definitions.yaml¶
Note
This may not be needed, depending on your OpenStack distribution. This was tested, and it is required for OpenStack-Ansible deployments.
For an example of the event definitions file, see: https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml
pipeline.yaml¶
Below is an example pipeline configuration that should work in most cases:
sinks:
  - name: cpu_sink
    publishers:
      - gnocchi://?filter_project=service&archive_policy=fleio_policy
  - name: network_sink
    publishers:
      - gnocchi://?filter_project=service&archive_policy=fleio_policy
  - name: object_storage_sink
    publishers:
      - gnocchi://?filter_project=service&archive_policy=fleio_policy
  - name: ceph_sink
    publishers:
      - gnocchi://?filter_project=service&archive_policy=fleio_policy
sources:
  - interval: 60
    name: cpu_source
    meters:
      - cpu
      - vcpus
    sinks:
      - cpu_sink
  - interval: 60
    name: network_source
    meters:
      - network.incoming.bytes
      - network.incoming.packets
      - network.outgoing.bytes
      - network.outgoing.packets
    sinks:
      - network_sink
  - interval: 60
    name: object_storage_source
    sinks:
      - object_storage_sink
    meters:
      - storage.objects
      - storage.objects.size
      - storage.objects.containers
      - storage.containers.objects
      - storage.containers.objects.size
  - interval: 60
    name: ceph_source
    sinks:
      - ceph_sink
    meters:
      - radosgw.objects
      - radosgw.objects.size
      - radosgw.objects.containers
      - radosgw.api.requests
Please note that you must specify the archive policy directly in pipeline.yaml. With the above configuration you will have measurements for cpu, vcpus, network.incoming.bytes, network.incoming.packets, network.outgoing.bytes and network.outgoing.packets.
Also, in order to correctly configure pipeline.yaml, please check gnocchi.conf to get the project_name. In our case it is project_name = service (used in pipeline.yaml as filter_project=service).
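For example, assuming gnocchi.conf is in its default location, you can look up the project name with:
grep project_name /etc/gnocchi/gnocchi.conf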
gnocchi_resources.yaml¶
Note
This may not be needed, depending on your OpenStack distribution. This was tested, and it is required for OpenStack-Ansible deployment.
In order to send the new metrics to Gnocchi, we need to make the Gnocchi dispatcher aware of them.
The default file for your OpenStack version can be used.
polling.yaml¶
The polling.yaml file is used by the Ceilometer polling agent to gather and process metrics before sending them to Gnocchi.
Here is an example polling.yaml file:
sources:
  - name: some_pollsters
    interval: 300
    meters:
      - cpu
      - cpu_l3_cache
      - memory.usage
      - network.incoming.bytes
      - network.incoming.packets
      - network.outgoing.bytes
      - network.outgoing.packets
      - disk.device.read.bytes
      - disk.device.read.requests
      - disk.device.write.bytes
      - disk.device.write.requests
      - hardware.cpu.util
      - hardware.memory.used
      - hardware.memory.total
      - hardware.memory.buffer
      - hardware.memory.cached
      - hardware.memory.swap.avail
      - hardware.memory.swap.total
      - hardware.system_stats.io.outgoing.blocks
      - hardware.system_stats.io.incoming.blocks
      - hardware.network.ip.incoming.datagrams
      - hardware.network.ip.outgoing.datagrams
      - storage.objects
      - storage.objects.size
      - storage.objects.containers
      - storage.containers.objects
      - storage.containers.objects.size
      - radosgw.objects
      - radosgw.objects.size
      - radosgw.objects.containers
      - radosgw.api.requests
Notice the storage.* meters. These are used by Fleio to account for Swift usage.
After setting up Ceilometer, restart the ceilometer-notification-agent and ceilometer-polling services and check the metrics. Additional configuration may be required for Swift.
See the OpenStack documentation for how to set up Ceilometer to connect to Swift with the ResellerAdmin role to gather data.
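Once the agents are restarted, you can check that measures are reaching Gnocchi; for example (the resource type and metric name below are just examples):
openstack metric resource list --type instance
openstack metric measures show cpu --resource-id <instance-uuid>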
Ceph configuration¶
If you are using the Ceph object store, you will need to add the following lines to your ceph.conf file:
[client.radosgw.gateway]
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1
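For the usage log settings to take effect, restart the Ceph Object Gateway afterwards. On a systemd-based host this typically looks like the following (the service instance name is an example; adjust it to your deployment):
systemctl restart ceph-radosgw@rgw.<hostname>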
Kolla Ansible OpenStack deployments¶
If your OpenStack is deployed using Kolla Ansible, you will need to create the /etc/kolla/config/ceilometer directory and create the polling.yaml and pipeline.yaml files in this directory with the contents provided above, for example:
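The following sketch assumes the two files are in your current directory:
mkdir -p /etc/kolla/config/ceilometer
cp polling.yaml pipeline.yaml /etc/kolla/config/ceilometer/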
After the files are created, you need to reconfigure Ceilometer by running the following command:
kolla-ansible reconfigure --tags ceilometer
Note
In some versions of Kolla Ansible some Ceilometer containers can be in an unhealthy state. If there are no errors in the log files from the /var/log/kolla/ceilometer directory, this should not interfere with Fleio - metrics should still be collected and available in Fleio.
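On Docker-based Kolla Ansible deployments, you can inspect the state of the Ceilometer containers with, for example:
docker ps --filter name=ceilometer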
If you are having issues with the metrics, please see Ceilometer troubleshooting.