Ceilometer configuration

Ceilometer is a required OpenStack service for Fleio. It gathers data and metrics about services and sends them to Gnocchi, which Fleio uses for billing.

Please check Ceilometer's ceilometer.conf, pipeline.yaml, polling.yaml and gnocchi_resources.yaml files to make sure the settings are similar to what is shown below.

ceilometer.conf

Under the [DEFAULT] section, make sure the dispatcher is set to gnocchi:

[DEFAULT]
...
event_dispatchers = gnocchi
meter_dispatchers = gnocchi
...

Under the [api] section, also make sure Gnocchi is enabled:

...
[api]
gnocchi_is_enabled = True
...

Fleio expects a specific Gnocchi archive policy. Create a new archive policy named fleio_policy and set it as the default (fleio_policy is just an example; you can use any name you want):

gnocchi archive-policy create -d granularity:5m,points:290 -d granularity:30m,points:336 -d granularity:1h,points:768 -m max -m mean -m sum fleio_policy
gnocchi archive-policy-rule create -a fleio_policy -m '*' default
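
The points and granularity values above determine how long measures are retained at each level (timespan = granularity × points). A quick sanity check of the resulting retention windows, sketched in Python:

```python
from datetime import timedelta

# (granularity, points) pairs from the archive-policy command above
definitions = [
    (timedelta(minutes=5), 290),
    (timedelta(minutes=30), 336),
    (timedelta(hours=1), 768),
]

for granularity, points in definitions:
    # Gnocchi keeps `points` aggregates at each granularity,
    # so the retention window is granularity * points
    print(f"granularity {granularity} x {points} points -> kept for {granularity * points}")
```

This prints retention windows of roughly one day, seven days and 32 days, matching the archive policy shown in the troubleshooting section.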

The [notification] settings should be similar to:

[notification]
# Notification queues to listen on.
# Ceilometer needs to connect to its own notification RabbitMQ vhost and to the RabbitMQ vhosts of all other services.
# Do not confuse a RabbitMQ vhost with a normal virtual host or server. RabbitMQ can have its own vhosts, such as /ceilometer, /swift, /nova and /cinder, as shown below.
# If only one vhost exists, there is no need to add multiple messaging_urls.
messaging_urls = rabbit://ceilometer:password@rabbitip:5672//ceilometer
messaging_urls = rabbit://swift:password@rabbitip:5672//swift
messaging_urls = rabbit://nova:password@rabbitip:5672//nova
messaging_urls = rabbit://cinder:password@rabbitip:5672//cinder
messaging_urls = rabbit://neutron:password@rabbitip:5672//neutron
store_events = False
workers = 3  # set to a value according to your load

event_pipeline.yaml

We chose to leave this file empty, since event notifications were filling the RabbitMQ queues with unnecessary messages.

event_definition.yaml

For an example of the event definitions file, see:
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml

pipeline.yaml

Below you can find an example pipeline configuration that should work in most cases:

sinks:
  - name: cpu_sink
    publishers:
      - gnocchi://?filter_project=service&archive_policy=fleio_policy
  - name: network_sink
    publishers:
      - gnocchi://?filter_project=service&archive_policy=fleio_policy
sources:
  - interval: 60
    meters:
      - cpu
      - vcpus
    name: cpu_source
    sinks:
      - cpu_sink
  - interval: 60
    meters:
      - network.incoming.bytes
      - network.incoming.packets
      - network.outgoing.bytes
      - network.outgoing.packets
    name: network_source
    sinks:
      - network_sink

Please note that you must specify the archive policy directly in pipeline.yaml. With the above configuration you will have measurements for cpu, vcpus, network.incoming.bytes, network.incoming.packets, network.outgoing.bytes and network.outgoing.packets.

Also, in order to configure pipeline.yaml correctly, please check gnocchi.conf to get the project_name. In our case it is project_name = service (hence filter_project=service in pipeline.yaml).
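
One way to confirm the match is to read project_name out of gnocchi.conf programmatically. A minimal sketch; the inline sample below is a stand-in for your real /etc/gnocchi/gnocchi.conf, and the section layout is the stock one:

```python
import configparser

# Stand-in for /etc/gnocchi/gnocchi.conf -- replace read_string()
# with cfg.read("/etc/gnocchi/gnocchi.conf") on a real deployment
sample = """
[keystone_authtoken]
project_name = service
username = gnocchi
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# This value must match filter_project= in pipeline.yaml
print(cfg["keystone_authtoken"]["project_name"])  # -> service
```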

gnocchi_resources.yaml

In order to send the new metrics to Gnocchi, we need to make the Gnocchi dispatcher aware of them.

The default file for your OpenStack version can be used.

polling.yaml

The polling.yaml file is used by the Ceilometer polling agent to gather and process metrics before sending them to Gnocchi.

Here is an example polling.yaml file:

sources:
    - name: some_pollsters
      interval: 300
      meters:
        - cpu
        - cpu_l3_cache
        - memory.usage
        - network.incoming.bytes
        - network.incoming.packets
        - network.outgoing.bytes
        - network.outgoing.packets
        - disk.device.read.bytes
        - disk.device.read.requests
        - disk.device.write.bytes
        - disk.device.write.requests
        - hardware.cpu.util
        - hardware.memory.used
        - hardware.memory.total
        - hardware.memory.buffer
        - hardware.memory.cached
        - hardware.memory.swap.avail
        - hardware.memory.swap.total
        - hardware.system_stats.io.outgoing.blocks
        - hardware.system_stats.io.incoming.blocks
        - hardware.network.ip.incoming.datagrams
        - hardware.network.ip.outgoing.datagrams
        - ip.floating
        - storage.objects
        - storage.objects.size
        - storage.objects.containers
        - storage.containers.objects
        - storage.containers.objects.size

Notice the ip.floating and storage.* meters. These are used by Fleio later to account for floating IPs and Swift usage.

After setting up Ceilometer, start the ceilometer-agent-notification and ceilometer-polling services and check the metrics. Additional configuration may be required for Swift.

See the OpenStack documentation for how to setup Ceilometer to connect to Swift with the ResellerAdmin role to gather data.
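
The Swift integration mentioned above typically involves granting the ResellerAdmin role to the Ceilometer user; a hedged sketch (the user and project names below are assumptions, adjust them to your deployment):

```shell
# Create the ResellerAdmin role if it does not exist yet
openstack role create ResellerAdmin

# Grant it to the ceilometer user on the service project
# (user/project names are assumptions; adjust for your deployment)
openstack role add --user ceilometer --project service ResellerAdmin
```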

Troubleshooting

If you properly configured Ceilometer and Gnocchi, the metrics should show correctly in the instance's "Metrics" tab. However, if no metrics are shown even a few hours after you deployed the instance, you might have a configuration issue.

Run the following commands:

Measures:
    gnocchi measures show --resource-id=$(gnocchi resource list | grep instance_network_interface | tail -n 1 | awk '{print $2}') --aggregation=max --granularity=3600 network.incoming.bytes
Metric:
    gnocchi metric show --resource-id=$(gnocchi resource list | grep instance_network_interface | tail -n 1 | awk '{print $2}') network.incoming.bytes
Archive-policy settings:
    gnocchi archive-policy show $(gnocchi metric show --resource-id=$(gnocchi resource list | grep instance_network_interface | tail -n 1 | awk '{print $2}') network.incoming.bytes | grep archive_policy | awk '{print $4}')

If the measures command returns "Aggregation method 'max' at granularity '3600.0' for metric xxxxxx does not exist (HTTP 404)", you must check the archive policy granularity. You should have a 1h granularity defined, which was configured with:

gnocchi archive-policy create -d granularity:5m,points:290 -d granularity:30m,points:336 -d granularity:1h,points:768 -m max -m mean -m sum your_policy

The next step is to run the metric command and check whether your resource is using the correct policy. In the output you should be able to identify archive_policy/name and make sure it is the correct one.

If everything is correct so far, run the third command. This should confirm that your archive policy has the correct aggregation methods and granularities. If it does not have the aggregation methods max, sum and mean, and the granularities shown below, please check your configuration.

Below you will find the correct archive policy:

+---------------------+-----------------------------------------------------------------+
| Field               | Value                                                           |
+---------------------+-----------------------------------------------------------------+
| aggregation_methods | max, sum, mean                                                  |
| back_window         | 0                                                               |
| definition          | - points: 290, granularity: 0:05:00, timespan: 1 day, 0:10:00   |
|                     | - points: 336, granularity: 0:30:00, timespan: 7 days, 0:00:00  |
|                     | - points: 768, granularity: 1:00:00, timespan: 32 days, 0:00:00 |
| name                | your_policy                                                     |
+---------------------+-----------------------------------------------------------------+

Another thing that needs to be checked is the pipeline.yaml file (see pipeline.yaml). In that file, check the publishers' archive policy (in our example from the configuration guide it is fleio_policy: gnocchi://?filter_project=service&archive_policy=fleio_policy). The policy name does not need to be the same across all regions, but it must match between Ceilometer's pipeline.yaml and the archive policy defined in Gnocchi.
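
One quick way to compare the two sides is to extract the archive_policy parameter from the publisher URL and look it up in Gnocchi. A sketch; the sample line below stands in for the real publisher entry in your pipeline.yaml:

```shell
# Sample gnocchi publisher line (stand-in for the one in pipeline.yaml)
line='- gnocchi://?filter_project=service&archive_policy=fleio_policy'

# Extract the archive_policy value from the query string
policy=$(printf '%s\n' "$line" | grep -o 'archive_policy=[^&]*' | cut -d= -f2)
echo "$policy"   # -> fleio_policy

# Then verify a policy with that name exists in Gnocchi:
# gnocchi archive-policy show "$policy"
```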

If they are different, we recommend clearing all metrics while Ceilometer is disabled, by running:

set -e; set -x

UTILITY=$(awk '/utility/ {print $NF}' /etc/hosts)

# Stop the Ceilometer agents on every hypervisor
for host in $(ssh $UTILITY "source openrc; openstack hypervisor list -f value -c 'Hypervisor Hostname'"); do
    ssh $host systemctl stop ceilometer-polling
    ssh $host systemctl stop ceilometer-agent-notification
done

# Recreate the default archive policy rule with the fleio policy
ssh $UTILITY "source openrc; gnocchi archive-policy-rule delete default"
ssh $UTILITY "source openrc; gnocchi archive-policy create -d granularity:5m,points:290 -d granularity:30m,points:336 -d granularity:1h,points:768 -m max -m mean -m sum fleio; gnocchi archive-policy-rule create -a fleio -m \\* default"

# Delete all existing Gnocchi resources and their metrics
for gnocchi_id in $(ssh $UTILITY "source openrc; gnocchi resource list -f value -c id"); do
    ssh $UTILITY "source openrc; gnocchi resource delete $gnocchi_id"
done

# Start the Ceilometer agents again
for host in $(ssh $UTILITY "source openrc; openstack hypervisor list -f value -c 'Hypervisor Hostname'"); do
    ssh $host systemctl start ceilometer-polling
    ssh $host systemctl start ceilometer-agent-notification
done

After that, you will have to follow the Ceilometer configuration guide again.