sure B<cpufreqd> (L<http://cpufreqd.sourceforge.net/>) or a similar tool is
installed and a "cpu governor" (that's a kernel module) is loaded.
+ If the system has the I<cpufreq-stats> kernel module loaded, this plugin reports
+ the rate of p-state (cpu frequency) transitions and the percentage of time spent
+ in each p-state.
+
=head2 Plugin C<cpusleep>
This plugin doesn't have any options. It reads CLOCK_BOOTTIME and
=back
+ =head2 Plugin C<gpu_nvidia>
+
+ Efficiently collects various statistics from the system's NVIDIA GPUs using the
+ NVML library. Currently collected are fan speed, core temperature, percent
+ load, percent memory used, compute and memory frequencies, and power
+ consumption.
+
+ =over 4
+
+ =item B<GPUIndex>
+
+ If one or more of these options is specified, statistics are collected only
+ for GPUs at the given indices (as numbered by nvidia-utils, for example in
+ the output of I<nvidia-smi>). If no instance of this option is specified,
+ all GPUs are monitored.
+
+ =item B<IgnoreSelected>
+
+ If set to true, all detected GPUs B<except> the ones at indices specified by
+ B<GPUIndex> entries are collected. Note that setting B<IgnoreSelected>
+ without any B<GPUIndex> directives will result in B<no> statistics being
+ collected.
+
+ =back
+
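+ A minimal example configuration using the options documented above,
+ collecting statistics only from the GPUs at indices 0 and 2:
+
+   <Plugin gpu_nvidia>
+     GPUIndex 0
+     GPUIndex 2
+   </Plugin>
+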
=head2 Plugin C<grpc>
The I<grpc> plugin provides an RPC interface to submit values to or query
Address "127.0.0.1"
Socket "/var/run/openvswitch/db.sock"
Bridges "br0" "br_ext"
+ InterfaceStats false
</Plugin>
The plugin provides the following configuration options:
Default: empty (monitor all bridges)
+=item B<InterfaceStats> B<false>|B<true>
+
+Indicates that the plugin should gather statistics for individual interfaces
+in addition to ports. This can be useful when monitoring an OVS setup with
+bond ports, where you might wish to know individual statistics for the
+interfaces included in the bonds. Defaults to B<false>.
+
=back
=head2 Plugin C<pcie_errors>
<Node "example">
Host "localhost"
Port "6379"
+ #Socket "/var/run/redis/redis.sock"
Timeout 2000
ReportCommandStats false
ReportCpuUsage true
connections. Either a service name or a port number may be given. Please note
that numerical port numbers must be given as a string, too.
+ =item B<Socket> I<Path>
+
+ Connect to Redis using the UNIX domain socket at I<Path>. If this
+ setting is given, the B<Hostname> and B<Port> settings are ignored.
+
=item B<Password> I<Password>
Use I<Password> to authenticate when connecting to I<Redis>.
if there is only one package and C<pkgE<lt>nE<gt>-coreE<lt>mE<gt>> if there is
more than one, where I<n> is the n-th core of package I<m>.
+ =item B<RestoreAffinityPolicy> I<AllCPUs>|I<Restore>
+
+ Reading data from the CPU has a side effect: the collectd process's CPU
+ affinity mask changes. After reading is complete, the affinity mask needs to
+ be restored. This option sets the restore policy.
+
+ B<AllCPUs> (the default): Restore the affinity by setting affinity to any/all
+ CPUs.
+
+ B<Restore>: Save affinity using sched_getaffinity() before reading data and
+ restore it after.
+
+ On some systems, sched_getaffinity() will fail due to an inconsistency of the
+ CPU set size between userspace and kernel. In these cases the plugin will
+ detect the unsuccessful call and fail with an error, preventing data
+ collection. Most configurations do not need to save affinity, as the collectd
+ process is allowed to run on any/all available CPUs.
+
+ If you need to save and restore affinity and get errors like 'Unable to save
+ the CPU affinity', setting the 'possible_cpus' kernel boot option may also
+ help.
+
+ See the following links for details:
+
+ L<https://github.com/collectd/collectd/issues/1593>
+ L<https://sourceware.org/bugzilla/show_bug.cgi?id=15630>
+ L<https://bugzilla.kernel.org/show_bug.cgi?id=151821>
+
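+ For example, assuming this section documents the I<turbostat> plugin (as
+ suggested by the package/core naming above), a configuration that saves and
+ restores the affinity mask would look like:
+
+   <Plugin turbostat>
+     RestoreAffinityPolicy "Restore"
+   </Plugin>
+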
=back
=head2 Plugin C<unixsock>
Only I<Connection> is required.
+ Consider the following example config:
+
+ <Plugin "virt">
+ Connection "qemu:///system"
+ HostnameFormat "hostname"
+ InterfaceFormat "address"
+ PluginInstanceFormat "name"
+ </Plugin>
+
+ It will generate the following values:
+
+ node42.example.com/virt-instance-0006f26c/disk_octets-vda
+ node42.example.com/virt-instance-0006f26c/disk_ops-vda
+ node42.example.com/virt-instance-0006f26c/if_dropped-ca:fe:ca:fe:ca:fe
+ node42.example.com/virt-instance-0006f26c/if_errors-ca:fe:ca:fe:ca:fe
+ node42.example.com/virt-instance-0006f26c/if_octets-ca:fe:ca:fe:ca:fe
+ node42.example.com/virt-instance-0006f26c/if_packets-ca:fe:ca:fe:ca:fe
+ node42.example.com/virt-instance-0006f26c/memory-actual_balloon
+ node42.example.com/virt-instance-0006f26c/memory-available
+ node42.example.com/virt-instance-0006f26c/memory-last_update
+ node42.example.com/virt-instance-0006f26c/memory-major_fault
+ node42.example.com/virt-instance-0006f26c/memory-minor_fault
+ node42.example.com/virt-instance-0006f26c/memory-rss
+ node42.example.com/virt-instance-0006f26c/memory-swap_in
+ node42.example.com/virt-instance-0006f26c/memory-swap_out
+ node42.example.com/virt-instance-0006f26c/memory-total
+ node42.example.com/virt-instance-0006f26c/memory-unused
+ node42.example.com/virt-instance-0006f26c/memory-usable
+ node42.example.com/virt-instance-0006f26c/virt_cpu_total
+ node42.example.com/virt-instance-0006f26c/virt_vcpu-0
+
+ You can get information on the metrics' units from the online libvirt
+ documentation. For instance, I<virt_cpu_total> is in nanoseconds.
+
=over 4
=item B<Connection> I<uri>
same guest across migrations.
B<hostname> means to use the global B<Hostname> setting, which is probably not
- useful on its own because all guests will appear to have the same name.
+ useful on its own because all guests will appear to have the same name. This is
+ useful in conjunction with B<PluginInstanceFormat> though.
You can also specify combinations of these fields. For example B<name uuid>
means to concatenate the guest name and UUID (with a literal colon character
=over 4
+ =item B<Host> I<Host>
+
+ Bind to the hostname / address I<Host>. By default, the plugin will bind to
+ the "any" address, i.e. accept packets sent to any of the host's addresses.
+
+ This option is supported only for libmicrohttpd newer than 0.9.0.
+
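+ For example, to restrict the embedded webserver to the loopback address
+ (shown here in a I<write_prometheus> plugin block, an assumption based on
+ the default port 9103):
+
+   <Plugin write_prometheus>
+     Host "127.0.0.1"
+     Port "9103"
+   </Plugin>
+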
=item B<Port> I<Port>
Port the embedded webserver should listen on. Defaults to B<9103>.
=back
+ =head2 Plugin C<write_stackdriver>
+
+ The C<write_stackdriver> plugin writes metrics to the
+ I<Google Stackdriver Monitoring> service.
+
+ This plugin supports two authentication methods: when B<CredentialFile> is
+ configured, credentials are read from the specified JSON file. Alternatively,
+ when running on I<Google Compute Engine> (GCE), an I<OAuth> token is
+ retrieved from the I<metadata server> and used to authenticate to GCM.
+
+ B<Synopsis:>
+
+ <Plugin write_stackdriver>
+ CredentialFile "/path/to/service_account.json"
+ <Resource "global">
+ Label "project_id" "monitored_project"
+ </Resource>
+ </Plugin>
+
+ =over 4
+
+ =item B<CredentialFile> I<file>
+
+ Path to a JSON credentials file holding the credentials for a GCP service
+ account.
+
+ If B<CredentialFile> is not specified, the plugin uses I<Application Default
+ Credentials>. That means which credentials are used depends on the environment:
+
+ =over 4
+
+ =item
+
+ The environment variable C<GOOGLE_APPLICATION_CREDENTIALS> is checked. If this
+ variable is specified it should point to a JSON file that defines the
+ credentials.
+
+ =item
+
+ The path C<${HOME}/.config/gcloud/application_default_credentials.json> is
+ checked. This is where credentials used by the I<gcloud> command line utility
+ are stored. You can use C<gcloud auth application-default login> to create
+ these credentials.
+
+ Please note that these credentials often belong to your personal account,
+ not a service account, and are therefore unfit for use in a production
+ environment.
+
+ =item
+
+ When running on GCE, the built-in service account associated with the virtual
+ machine instance is used.
+ See also the B<Email> option below.
+
+ =back
+
+ =item B<Project> I<Project>
+
+ The I<Project ID> or the I<Project Number> of the I<Stackdriver Account>. The
+ I<Project ID> is a string identifying the GCP project, which you can choose
+ freely when creating a new project. The I<Project Number> is a 12-digit decimal
+ number. You can look up both on the I<Developer Console>.
+
+ This setting is optional. If not set, the project ID is read from the
+ credentials file or determined from the GCE's metadata service.
+
+ =item B<Email> I<Email> (GCE only)
+
+ Chooses the GCE I<Service Account> used for authentication.
+
+ Each GCE instance has a C<default> I<Service Account> but may also be
+ associated with additional I<Service Accounts>. This is often used to restrict
+ the permissions of services running on the GCE instance to the required
+ minimum. The I<write_stackdriver> plugin requires the
+ C<https://www.googleapis.com/auth/monitoring> scope. When multiple I<Service
+ Accounts> are available, this option selects which one is used by the
+ I<write_stackdriver> plugin.
+
+ =item B<Resource> I<ResourceType>
+
+ Configures the I<Monitored Resource> to use when storing metrics.
+ More information on I<Monitored Resources> and I<Monitored Resource Types> is
+ available at L<https://cloud.google.com/monitoring/api/resources>.
+
+ This block takes one string argument, the I<ResourceType>. Inside the block are
+ one or more B<Label> options which configure the resource labels.
+
+ This block is optional. The default value depends on the runtime environment:
+ on GCE, the C<gce_instance> resource type is used, otherwise the C<global>
+ resource type is used:
+
+ =over 4
+
+ =item
+
+ B<On GCE>, defaults to the equivalent of this config:
+
+ <Resource "gce_instance">
+ Label "project_id" "<project_id>"
+ Label "instance_id" "<instance_id>"
+ Label "zone" "<zone>"
+ </Resource>
+
+ The values for I<project_id>, I<instance_id> and I<zone> are read from the GCE
+ metadata service.
+
+ =item
+
+ B<Elsewhere>, i.e. not on GCE, defaults to the equivalent of this config:
+
+ <Resource "global">
+ Label "project_id" "<Project>"
+ </Resource>
+
+ Where I<Project> refers to the value of the B<Project> option or the project ID
+ inferred from the B<CredentialFile>.
+
+ =back
+
+ =item B<Url> I<Url>
+
+ URL of the I<Stackdriver Monitoring> API. Defaults to
+ C<https://monitoring.googleapis.com/v3>.
+
+ =back
+
=head2 Plugin C<xencpu>
This plugin collects metrics of hardware CPU load for a machine running Xen
#define PORT_NAME_SIZE_MAX 255
#define UUID_SIZE 64
-typedef struct port_s {
- char name[PORT_NAME_SIZE_MAX]; /* Port name */
+typedef struct interface_s {
+ char name[PORT_NAME_SIZE_MAX]; /* Interface name */
char port_uuid[UUID_SIZE]; /* Port table _uuid */
char iface_uuid[UUID_SIZE]; /* Interface table uuid */
char ex_iface_id[UUID_SIZE]; /* External iface id */
char ex_vm_id[UUID_SIZE]; /* External vm id */
- int64_t stats[IFACE_COUNTER_COUNT]; /* Port statistics */
- struct bridge_list_s *br; /* Pointer to bridge */
- struct port_s *next; /* Next port */
+ int64_t stats[IFACE_COUNTER_COUNT]; /* Statistics for interface */
+ struct interface_s *next; /* Next interface for associated port */
+} interface_list_t;
+
+typedef struct port_s {
+ char name[PORT_NAME_SIZE_MAX]; /* Port name */
+ char port_uuid[UUID_SIZE]; /* Port table _uuid */
+ struct bridge_list_s *br; /* Pointer to bridge */
+ struct interface_s *iface; /* Pointer to first interface */
+ struct port_s *next; /* Next port */
} port_list_t;
typedef struct bridge_list_s {
struct bridge_list_s *next; /* Next bridge*/
} bridge_list_t;
+ #define cnt_str(x) [x] = #x
+
static const char *const iface_counter_table[IFACE_COUNTER_COUNT] = {
- [collisions] = "collisions",
- [rx_bytes] = "rx_bytes",
- [rx_crc_err] = "rx_crc_err",
- [rx_dropped] = "rx_dropped",
- [rx_errors] = "rx_errors",
- [rx_frame_err] = "rx_frame_err",
- [rx_over_err] = "rx_over_err",
- [rx_packets] = "rx_packets",
- [tx_bytes] = "tx_bytes",
- [tx_dropped] = "tx_dropped",
- [tx_errors] = "tx_errors",
- [tx_packets] = "tx_packets",
- [rx_1_to_64_packets] = "rx_1_to_64_packets",
- [rx_65_to_127_packets] = "rx_65_to_127_packets",
- [rx_128_to_255_packets] = "rx_128_to_255_packets",
- [rx_256_to_511_packets] = "rx_256_to_511_packets",
- [rx_512_to_1023_packets] = "rx_512_to_1023_packets",
- [rx_1024_to_1522_packets] = "rx_1024_to_1518_packets",
- [rx_1523_to_max_packets] = "rx_1523_to_max_packets",
- [tx_1_to_64_packets] = "tx_1_to_64_packets",
- [tx_65_to_127_packets] = "tx_65_to_127_packets",
- [tx_128_to_255_packets] = "tx_128_to_255_packets",
- [tx_256_to_511_packets] = "tx_256_to_511_packets",
- [tx_512_to_1023_packets] = "tx_512_to_1023_packets",
- [tx_1024_to_1522_packets] = "tx_1024_to_1518_packets",
- [tx_1523_to_max_packets] = "tx_1523_to_max_packets",
- [tx_multicast_packets] = "tx_multicast_packets",
- [rx_broadcast_packets] = "rx_broadcast_packets",
- [tx_broadcast_packets] = "tx_broadcast_packets",
- [rx_undersized_errors] = "rx_undersized_errors",
- [rx_oversize_errors] = "rx_oversize_errors",
- [rx_fragmented_errors] = "rx_fragmented_errors",
- [rx_jabber_errors] = "rx_jabber_errors",
+ cnt_str(collisions),
+ cnt_str(rx_bytes),
+ cnt_str(rx_crc_err),
+ cnt_str(rx_dropped),
+ cnt_str(rx_errors),
+ cnt_str(rx_frame_err),
+ cnt_str(rx_over_err),
+ cnt_str(rx_packets),
+ cnt_str(tx_bytes),
+ cnt_str(tx_dropped),
+ cnt_str(tx_errors),
+ cnt_str(tx_packets),
+ cnt_str(rx_1_to_64_packets),
+ cnt_str(rx_65_to_127_packets),
+ cnt_str(rx_128_to_255_packets),
+ cnt_str(rx_256_to_511_packets),
+ cnt_str(rx_512_to_1023_packets),
+ cnt_str(rx_1024_to_1522_packets),
+ cnt_str(rx_1523_to_max_packets),
+ cnt_str(tx_1_to_64_packets),
+ cnt_str(tx_65_to_127_packets),
+ cnt_str(tx_128_to_255_packets),
+ cnt_str(tx_256_to_511_packets),
+ cnt_str(tx_512_to_1023_packets),
+ cnt_str(tx_1024_to_1522_packets),
+ cnt_str(tx_1523_to_max_packets),
+ cnt_str(tx_multicast_packets),
+ cnt_str(rx_broadcast_packets),
+ cnt_str(tx_broadcast_packets),
+ cnt_str(rx_undersized_errors),
+ cnt_str(rx_oversize_errors),
+ cnt_str(rx_fragmented_errors),
+ cnt_str(rx_jabber_errors),
};
+ #undef cnt_str
+
/* Entry into the list of network bridges */
static bridge_list_t *g_bridge_list_head;
.ovs_db_serv = "6640", /* use default OVS DB service */
};
+/* flag indicating whether or not to publish individual interface statistics */
+static bool interface_stats = false;
+
static iface_counter ovs_stats_counter_name_to_type(const char *counter) {
iface_counter index = not_supported;
plugin_dispatch_values(&vl);
}
- ovs_stats_submit_two(devname, "if_packets", "1024_to_1518_packets",
+static void ovs_stats_submit_interfaces(bridge_list_t *bridge,
+ port_list_t *port) {
+ char devname[PORT_NAME_SIZE_MAX * 2];
+
+ for (interface_list_t *iface = port->iface; iface != NULL;
+ iface = iface->next) {
+ meta_data_t *meta = meta_data_create();
+ if (meta != NULL) {
+ meta_data_add_string(meta, "uuid", iface->iface_uuid);
+
+ if (strlen(iface->ex_vm_id))
+ meta_data_add_string(meta, "vm-uuid", iface->ex_vm_id);
+
+ if (strlen(iface->ex_iface_id))
+ meta_data_add_string(meta, "iface-id", iface->ex_iface_id);
+ }
+ snprintf(devname, sizeof(devname), "%s.%s.%s", bridge->name, port->name,
+ iface->name);
+ ovs_stats_submit_one(devname, "if_collisions", NULL,
+ iface->stats[collisions], meta);
+ ovs_stats_submit_two(devname, "if_dropped", NULL, iface->stats[rx_dropped],
+ iface->stats[tx_dropped], meta);
+ ovs_stats_submit_two(devname, "if_errors", NULL, iface->stats[rx_errors],
+ iface->stats[tx_errors], meta);
+ ovs_stats_submit_two(devname, "if_packets", NULL, iface->stats[rx_packets],
+ iface->stats[tx_packets], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "crc",
+ iface->stats[rx_crc_err], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "frame",
+ iface->stats[rx_frame_err], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "over",
+ iface->stats[rx_over_err], meta);
+ ovs_stats_submit_one(devname, "if_rx_octets", NULL, iface->stats[rx_bytes],
+ meta);
+ ovs_stats_submit_one(devname, "if_tx_octets", NULL, iface->stats[tx_bytes],
+ meta);
+ ovs_stats_submit_two(devname, "if_packets", "1_to_64_packets",
+ iface->stats[rx_1_to_64_packets],
+ iface->stats[tx_1_to_64_packets], meta);
+ ovs_stats_submit_two(devname, "if_packets", "65_to_127_packets",
+ iface->stats[rx_65_to_127_packets],
+ iface->stats[tx_65_to_127_packets], meta);
+ ovs_stats_submit_two(devname, "if_packets", "128_to_255_packets",
+ iface->stats[rx_128_to_255_packets],
+ iface->stats[tx_128_to_255_packets], meta);
+ ovs_stats_submit_two(devname, "if_packets", "256_to_511_packets",
+ iface->stats[rx_256_to_511_packets],
+ iface->stats[tx_256_to_511_packets], meta);
+ ovs_stats_submit_two(devname, "if_packets", "512_to_1023_packets",
+ iface->stats[rx_512_to_1023_packets],
+ iface->stats[tx_512_to_1023_packets], meta);
- devname, "if_packets", "1024_to_1518_packets",
++ ovs_stats_submit_two(devname, "if_packets", "1024_to_1522_packets",
+ iface->stats[rx_1024_to_1522_packets],
+ iface->stats[tx_1024_to_1522_packets], meta);
+ ovs_stats_submit_two(devname, "if_packets", "1523_to_max_packets",
+ iface->stats[rx_1523_to_max_packets],
+ iface->stats[tx_1523_to_max_packets], meta);
+ ovs_stats_submit_two(devname, "if_packets", "broadcast_packets",
+ iface->stats[rx_broadcast_packets],
+ iface->stats[tx_broadcast_packets], meta);
+ ovs_stats_submit_one(devname, "if_multicast", "tx_multicast_packets",
+ iface->stats[tx_multicast_packets], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "rx_undersized_errors",
+ iface->stats[rx_undersized_errors], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "rx_oversize_errors",
+ iface->stats[rx_oversize_errors], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "rx_fragmented_errors",
+ iface->stats[rx_fragmented_errors], meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "rx_jabber_errors",
+ iface->stats[rx_jabber_errors], meta);
+
+ meta_data_destroy(meta);
+ }
+}
+
+static int64_t ovs_stats_get_port_stat_value(port_list_t *port,
+                                             iface_counter index) {
+  if (port == NULL)
+    return 0;
+
+  /* Use int64_t to match the per-interface counters and avoid truncation */
+  int64_t value = 0;
+
+  for (interface_list_t *iface = port->iface; iface != NULL;
+       iface = iface->next) {
+    value = value + iface->stats[index];
+  }
+
+  return value;
+}
+
+static void ovs_stats_submit_port(bridge_list_t *bridge, port_list_t *port) {
+ char devname[PORT_NAME_SIZE_MAX * 2];
+
+ meta_data_t *meta = meta_data_create();
+ if (meta != NULL) {
+ char key_str[DATA_MAX_NAME_LEN];
+ int i = 0;
+
+ for (interface_list_t *iface = port->iface; iface != NULL;
+ iface = iface->next) {
+ memset(key_str, '\0', DATA_MAX_NAME_LEN);
+      snprintf(key_str, sizeof(key_str), "uuid%d", i);
+ meta_data_add_string(meta, key_str, iface->iface_uuid);
+
+ if (strlen(iface->ex_vm_id)) {
+ memset(key_str, '\0', DATA_MAX_NAME_LEN);
+        snprintf(key_str, sizeof(key_str), "vm-uuid%d", i);
+ meta_data_add_string(meta, key_str, iface->ex_vm_id);
+ }
+
+ if (strlen(iface->ex_iface_id)) {
+ memset(key_str, '\0', DATA_MAX_NAME_LEN);
+        snprintf(key_str, sizeof(key_str), "iface-id%d", i);
+ meta_data_add_string(meta, key_str, iface->ex_iface_id);
+ }
+
+ i++;
+ }
+ }
+ snprintf(devname, sizeof(devname), "%s.%s", bridge->name, port->name);
+ ovs_stats_submit_one(devname, "if_collisions", NULL,
+ ovs_stats_get_port_stat_value(port, collisions), meta);
+ ovs_stats_submit_two(devname, "if_dropped", NULL,
+ ovs_stats_get_port_stat_value(port, rx_dropped),
+ ovs_stats_get_port_stat_value(port, tx_dropped), meta);
+ ovs_stats_submit_two(devname, "if_errors", NULL,
+ ovs_stats_get_port_stat_value(port, rx_errors),
+ ovs_stats_get_port_stat_value(port, tx_errors), meta);
+ ovs_stats_submit_two(devname, "if_packets", NULL,
+ ovs_stats_get_port_stat_value(port, rx_packets),
+ ovs_stats_get_port_stat_value(port, tx_packets), meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "crc",
+ ovs_stats_get_port_stat_value(port, rx_crc_err), meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "frame",
+ ovs_stats_get_port_stat_value(port, rx_frame_err), meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "over",
+ ovs_stats_get_port_stat_value(port, rx_over_err), meta);
+ ovs_stats_submit_one(devname, "if_rx_octets", NULL,
+ ovs_stats_get_port_stat_value(port, rx_bytes), meta);
+ ovs_stats_submit_one(devname, "if_tx_octets", NULL,
+ ovs_stats_get_port_stat_value(port, tx_bytes), meta);
+ ovs_stats_submit_two(devname, "if_packets", "1_to_64_packets",
+ ovs_stats_get_port_stat_value(port, rx_1_to_64_packets),
+ ovs_stats_get_port_stat_value(port, tx_1_to_64_packets),
+ meta);
+ ovs_stats_submit_two(
+ devname, "if_packets", "65_to_127_packets",
+ ovs_stats_get_port_stat_value(port, rx_65_to_127_packets),
+ ovs_stats_get_port_stat_value(port, tx_65_to_127_packets), meta);
+ ovs_stats_submit_two(
+ devname, "if_packets", "128_to_255_packets",
+ ovs_stats_get_port_stat_value(port, rx_128_to_255_packets),
+ ovs_stats_get_port_stat_value(port, tx_128_to_255_packets), meta);
+ ovs_stats_submit_two(
+ devname, "if_packets", "256_to_511_packets",
+ ovs_stats_get_port_stat_value(port, rx_256_to_511_packets),
+ ovs_stats_get_port_stat_value(port, tx_256_to_511_packets), meta);
+ ovs_stats_submit_two(
+ devname, "if_packets", "512_to_1023_packets",
+ ovs_stats_get_port_stat_value(port, rx_512_to_1023_packets),
+ ovs_stats_get_port_stat_value(port, tx_512_to_1023_packets), meta);
+ ovs_stats_submit_two(
++ devname, "if_packets", "1024_to_1522_packets",
+ ovs_stats_get_port_stat_value(port, rx_1024_to_1522_packets),
+ ovs_stats_get_port_stat_value(port, tx_1024_to_1522_packets), meta);
+ ovs_stats_submit_two(
+ devname, "if_packets", "1523_to_max_packets",
+ ovs_stats_get_port_stat_value(port, rx_1523_to_max_packets),
+ ovs_stats_get_port_stat_value(port, tx_1523_to_max_packets), meta);
+ ovs_stats_submit_two(
+ devname, "if_packets", "broadcast_packets",
+ ovs_stats_get_port_stat_value(port, rx_broadcast_packets),
+ ovs_stats_get_port_stat_value(port, tx_broadcast_packets), meta);
+ ovs_stats_submit_one(
+ devname, "if_multicast", "tx_multicast_packets",
+ ovs_stats_get_port_stat_value(port, tx_multicast_packets), meta);
+ ovs_stats_submit_one(
+ devname, "if_rx_errors", "rx_undersized_errors",
+ ovs_stats_get_port_stat_value(port, rx_undersized_errors), meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "rx_oversize_errors",
+ ovs_stats_get_port_stat_value(port, rx_oversize_errors),
+ meta);
+ ovs_stats_submit_one(
+ devname, "if_rx_errors", "rx_fragmented_errors",
+ ovs_stats_get_port_stat_value(port, rx_fragmented_errors), meta);
+ ovs_stats_submit_one(devname, "if_rx_errors", "rx_jabber_errors",
+ ovs_stats_get_port_stat_value(port, rx_jabber_errors),
+ meta);
+
+ meta_data_destroy(meta);
+}
+
static port_list_t *ovs_stats_get_port(const char *uuid) {
if (uuid == NULL)
return NULL;
return NULL;
}
-static port_list_t *ovs_stats_get_port_by_name(const char *name) {
- if (name == NULL)
+static port_list_t *ovs_stats_get_port_by_interface_uuid(const char *uuid) {
+ if (uuid == NULL)
return NULL;
- for (port_list_t *port = g_port_list_head; port != NULL; port = port->next)
- if ((strncmp(port->name, name, strlen(port->name)) == 0) &&
- strlen(name) == strlen(port->name))
- return port;
+ for (port_list_t *port = g_port_list_head; port != NULL; port = port->next) {
+ for (interface_list_t *iface = port->iface; iface != NULL;
+ iface = iface->next) {
+ if (strncmp(iface->iface_uuid, uuid, strlen(uuid)) == 0)
+ return port;
+ }
+ }
+ return NULL;
+}
+
+static interface_list_t *ovs_stats_get_port_interface(port_list_t *port,
+ const char *uuid) {
+ if (port == NULL || uuid == NULL)
+ return NULL;
+
+ for (interface_list_t *iface = port->iface; iface != NULL;
+ iface = iface->next) {
+ if (strncmp(iface->iface_uuid, uuid, strlen(uuid)) == 0)
+ return iface;
+ }
+ return NULL;
+}
+
+static interface_list_t *ovs_stats_get_interface(const char *uuid) {
+ if (uuid == NULL)
+ return NULL;
+
+ for (port_list_t *port = g_port_list_head; port != NULL; port = port->next) {
+ for (interface_list_t *iface = port->iface; iface != NULL;
+ iface = iface->next) {
+ if (strncmp(iface->iface_uuid, uuid, strlen(uuid)) == 0)
+ return iface;
+ }
+ }
return NULL;
}
+static interface_list_t *ovs_stats_new_port_interface(port_list_t *port,
+ const char *uuid) {
+ if (uuid == NULL)
+ return NULL;
+
+ interface_list_t *iface = ovs_stats_get_port_interface(port, uuid);
+
+ if (iface == NULL) {
+ iface = (interface_list_t *)calloc(1, sizeof(interface_list_t));
+ if (!iface) {
+ ERROR("%s: Error allocating interface", plugin_name);
+ return NULL;
+ }
+ memset(iface->stats, -1, sizeof(int64_t[IFACE_COUNTER_COUNT]));
+ sstrncpy(iface->iface_uuid, uuid, sizeof(iface->iface_uuid));
+ sstrncpy(iface->port_uuid, port->port_uuid, sizeof(iface->port_uuid));
+ pthread_mutex_lock(&g_stats_lock);
+ interface_list_t *iface_head = port->iface;
+ iface->next = iface_head;
+ port->iface = iface;
+ pthread_mutex_unlock(&g_stats_lock);
+ }
+ return iface;
+}
+
/* Create or get port by port uuid */
static port_list_t *ovs_stats_new_port(bridge_list_t *bridge,
const char *uuid) {
ERROR("%s: Error allocating port", plugin_name);
return NULL;
}
- memset(port->stats, -1, sizeof(int64_t[IFACE_COUNTER_COUNT]));
sstrncpy(port->port_uuid, uuid, sizeof(port->port_uuid));
pthread_mutex_lock(&g_stats_lock);
port->next = g_port_list_head;
return;
}
-/* Update port name */
+/* Update port name and interface UUID(s) */
static int ovs_stats_update_port(const char *uuid, yajl_val port) {
const char *new[] = {"new", NULL};
const char *name[] = {"name", NULL};
sstrncpy(portentry->name, YAJL_GET_STRING(port_name),
sizeof(portentry->name));
pthread_mutex_unlock(&g_stats_lock);
+
+ yajl_val ifaces_root = ovs_utils_get_value_by_key(row, "interfaces");
+ char *ifaces_root_key =
+ YAJL_GET_STRING(YAJL_GET_ARRAY(ifaces_root)->values[0]);
+
+ char *iface_uuid_str = NULL;
+
+ if (strcmp("set", ifaces_root_key) == 0) {
+ // ifaces_root is ["set", [[ "uuid", "<some_uuid>" ], [ "uuid",
+ // "<another_uuid>" ], ... ]]
+ yajl_val ifaces_list = YAJL_GET_ARRAY(ifaces_root)->values[1];
+
+ // ifaces_list is [[ "uuid", "<some_uuid>" ], [ "uuid",
+ // "<another_uuid>" ], ... ]]
+      for (size_t i = 0; i < YAJL_GET_ARRAY(ifaces_list)->len; i++) {
+ yajl_val iface_tuple = YAJL_GET_ARRAY(ifaces_list)->values[i];
+
+ // iface_tuple is [ "uuid", "<some_uuid>" ]
+ iface_uuid_str =
+ YAJL_GET_STRING(YAJL_GET_ARRAY(iface_tuple)->values[1]);
+
+ interface_list_t *iface =
+ ovs_stats_get_port_interface(portentry, iface_uuid_str);
+
+ if (iface == NULL) {
+ iface = ovs_stats_new_port_interface(portentry, iface_uuid_str);
+ }
+ }
+ } else {
+ // ifaces_root is [ "uuid", "<some_uuid>" ]
+ iface_uuid_str =
+ YAJL_GET_STRING(YAJL_GET_ARRAY(ifaces_root)->values[1]);
+
+ interface_list_t *iface =
+ ovs_stats_get_port_interface(portentry, iface_uuid_str);
+
+ if (iface == NULL) {
+ iface = ovs_stats_new_port_interface(portentry, iface_uuid_str);
+ }
+ }
}
}
}
g_port_list_head = port->next;
else
prev_port->next = port->next;
+
+ for (interface_list_t *iface = port->iface; iface != NULL;
+ iface = port->iface) {
+ interface_list_t *del = iface;
+ port->iface = iface->next;
+ sfree(del);
+ }
+
sfree(port);
break;
}
}
/* Update interface statistics */
-static int ovs_stats_update_iface_stats(port_list_t *port, yajl_val stats) {
+static int ovs_stats_update_iface_stats(interface_list_t *iface,
+ yajl_val stats) {
yajl_val stat;
iface_counter counter_index = 0;
char *counter_name = NULL;
int64_t counter_value = 0;
- if (stats && YAJL_IS_ARRAY(stats))
+ if (stats && YAJL_IS_ARRAY(stats)) {
for (size_t i = 0; i < YAJL_GET_ARRAY(stats)->len; i++) {
stat = YAJL_GET_ARRAY(stats)->values[i];
if (!YAJL_IS_ARRAY(stat))
counter_value = YAJL_GET_INTEGER(YAJL_GET_ARRAY(stat)->values[1]);
if (counter_index == not_supported)
continue;
- port->stats[counter_index] = counter_value;
+
+ iface->stats[counter_index] = counter_value;
}
+ }
return 0;
}
/* Update interface external_ids */
-static int ovs_stats_update_iface_ext_ids(port_list_t *port, yajl_val ext_ids) {
+static int ovs_stats_update_iface_ext_ids(interface_list_t *iface,
+ yajl_val ext_ids) {
yajl_val ext_id;
char *key;
char *value;
- if (ext_ids && YAJL_IS_ARRAY(ext_ids))
+ if (ext_ids && YAJL_IS_ARRAY(ext_ids)) {
for (size_t i = 0; i < YAJL_GET_ARRAY(ext_ids)->len; i++) {
ext_id = YAJL_GET_ARRAY(ext_ids)->values[i];
if (!YAJL_IS_ARRAY(ext_id))
key = YAJL_GET_STRING(YAJL_GET_ARRAY(ext_id)->values[0]);
value = YAJL_GET_STRING(YAJL_GET_ARRAY(ext_id)->values[1]);
if (key && value) {
- if (strncmp(key, "iface-id", strlen(key)) == 0)
- sstrncpy(port->ex_iface_id, value, sizeof(port->ex_iface_id));
- else if (strncmp(key, "vm-uuid", strlen(key)) == 0)
- sstrncpy(port->ex_vm_id, value, sizeof(port->ex_vm_id));
+ if (strncmp(key, "iface-id", strlen(key)) == 0) {
+ sstrncpy(iface->ex_iface_id, value, sizeof(iface->ex_iface_id));
+ } else if (strncmp(key, "vm-uuid", strlen(key)) == 0) {
+ sstrncpy(iface->ex_vm_id, value, sizeof(iface->ex_vm_id));
+ }
}
}
+ }
return 0;
}
/* Get interface statistic and external_ids */
-static int ovs_stats_update_iface(yajl_val iface) {
- if (!iface || !YAJL_IS_OBJECT(iface)) {
- ERROR("ovs_stats plugin: incorrect JSON port data");
+static int ovs_stats_update_iface(yajl_val iface_obj) {
+ if (!iface_obj || !YAJL_IS_OBJECT(iface_obj)) {
+ ERROR("ovs_stats plugin: incorrect JSON interface data");
return -1;
}
- yajl_val row = ovs_utils_get_value_by_key(iface, "new");
+ yajl_val row = ovs_utils_get_value_by_key(iface_obj, "new");
if (!row || !YAJL_IS_OBJECT(row))
return 0;
if (!iface_name || !YAJL_IS_STRING(iface_name))
return 0;
- port_list_t *port = ovs_stats_get_port_by_name(YAJL_GET_STRING(iface_name));
- if (port == NULL)
+ yajl_val iface_uuid = ovs_utils_get_value_by_key(row, "_uuid");
+ if (!iface_uuid || !YAJL_IS_ARRAY(iface_uuid) ||
+ YAJL_GET_ARRAY(iface_uuid)->len != 2)
return 0;
+ char *iface_uuid_str = NULL;
+
+ iface_uuid_str = YAJL_GET_STRING(YAJL_GET_ARRAY(iface_uuid)->values[1]);
+
+ if (iface_uuid_str == NULL) {
+ ERROR("ovs_stats plugin: incorrect JSON interface data");
+ return -1;
+ }
+
+ interface_list_t *iface = ovs_stats_get_interface(iface_uuid_str);
+
+ if (iface == NULL)
+ return 0;
+
+ sstrncpy(iface->name, YAJL_GET_STRING(iface_name), sizeof(iface->name));
+
yajl_val iface_stats = ovs_utils_get_value_by_key(row, "statistics");
yajl_val iface_ext_ids = ovs_utils_get_value_by_key(row, "external_ids");
- yajl_val iface_uuid = ovs_utils_get_value_by_key(row, "_uuid");
+
/*
* {
"statistics": [
}
Check that statistics is an array with 2 elements
*/
+
if (iface_stats && YAJL_IS_ARRAY(iface_stats) &&
YAJL_GET_ARRAY(iface_stats)->len == 2)
- ovs_stats_update_iface_stats(port, YAJL_GET_ARRAY(iface_stats)->values[1]);
+ ovs_stats_update_iface_stats(iface, YAJL_GET_ARRAY(iface_stats)->values[1]);
if (iface_ext_ids && YAJL_IS_ARRAY(iface_ext_ids))
- ovs_stats_update_iface_ext_ids(port,
+ ovs_stats_update_iface_ext_ids(iface,
YAJL_GET_ARRAY(iface_ext_ids)->values[1]);
- if (iface_uuid && YAJL_IS_ARRAY(iface_uuid) &&
- YAJL_GET_ARRAY(iface_uuid)->len == 2 &&
- YAJL_GET_STRING(YAJL_GET_ARRAY(iface_uuid)->values[1]) != NULL)
- sstrncpy(port->iface_uuid,
- YAJL_GET_STRING(YAJL_GET_ARRAY(iface_uuid)->values[1]),
- sizeof(port->iface_uuid));
- else {
- ERROR("ovs_stats plugin: incorrect JSON interface data");
- return -1;
+
+ return 0;
+}
+
+/* Delete interface */
+static int ovs_stats_del_interface(const char *uuid) {
+ port_list_t *port;
+
+ port = ovs_stats_get_port_by_interface_uuid(uuid);
+
+ if (port != NULL) {
+ interface_list_t *prev_iface = NULL;
+
+    for (interface_list_t *iface = port->iface; iface != NULL;
+         iface = iface->next) {
+      /* Delete the interface whose UUID matches; advancing via iface->next is
+       * safe because we break immediately after freeing a node. */
+      if (strncmp(iface->iface_uuid, uuid, strlen(iface->iface_uuid)) == 0) {
+
+        interface_list_t *del = iface;
+
+        if (prev_iface == NULL)
+          port->iface = iface->next;
+        else
+          prev_iface->next = iface->next;
+
+        sfree(del);
+        break;
+      } else
+        prev_iface = iface;
+    }
+ }
}
return 0;
}
*/
const char *path[] = {"Interface", NULL};
- yajl_val ports = yajl_tree_get(jupdates, path, yajl_t_object);
+ yajl_val interfaces = yajl_tree_get(jupdates, path, yajl_t_object);
pthread_mutex_lock(&g_stats_lock);
- if (ports && YAJL_IS_OBJECT(ports))
- for (size_t i = 0; i < YAJL_GET_OBJECT(ports)->len; i++)
- ovs_stats_update_iface(YAJL_GET_OBJECT(ports)->values[i]);
+ if (interfaces && YAJL_IS_OBJECT(interfaces))
+ for (size_t i = 0; i < YAJL_GET_OBJECT(interfaces)->len; i++) {
+ ovs_stats_update_iface(YAJL_GET_OBJECT(interfaces)->values[i]);
+ }
pthread_mutex_unlock(&g_stats_lock);
return;
}
return;
}
+/* Handle Interface Table delete event */
+static void ovs_stats_interface_table_delete_cb(yajl_val jupdates) {
+ const char *path[] = {"Interface", NULL};
+ yajl_val interfaces = yajl_tree_get(jupdates, path, yajl_t_object);
+ pthread_mutex_lock(&g_stats_lock);
+ if (interfaces && YAJL_IS_OBJECT(interfaces))
+ for (size_t i = 0; i < YAJL_GET_OBJECT(interfaces)->len; i++) {
+ ovs_stats_del_interface(YAJL_GET_OBJECT(interfaces)->keys[i]);
+ }
+ pthread_mutex_unlock(&g_stats_lock);
+ return;
+}
+
/* Setup OVS DB table callbacks */
static void ovs_stats_initialize(ovs_db_t *pdb) {
const char *bridge_columns[] = {"name", "ports", NULL};
ovs_stats_interface_table_result_cb,
OVS_DB_TABLE_CB_FLAG_INITIAL | OVS_DB_TABLE_CB_FLAG_INSERT |
OVS_DB_TABLE_CB_FLAG_MODIFY);
+
+ ovs_db_table_cb_register(pdb, "Interface", interface_columns,
+ ovs_stats_interface_table_delete_cb, NULL,
+ OVS_DB_TABLE_CB_FLAG_DELETE);
}
/* Check if bridge is configured to be monitored in config file */
static void ovs_stats_free_port_list(port_list_t *head) {
for (port_list_t *i = head; i != NULL;) {
port_list_t *del = i;
+
+ for (interface_list_t *iface = i->iface; iface != NULL; iface = i->iface) {
+ interface_list_t *del2 = iface;
+ i->iface = iface->next;
+ sfree(del2);
+ }
+
i = i->next;
sfree(del);
}
}
}
}
+ } else if (strcasecmp("InterfaceStats", child->key) == 0) {
+ if (cf_util_get_boolean(child, &interface_stats) != 0) {
+ ERROR("%s: parse '%s' option failed", plugin_name, child->key);
+ return -1;
+ }
} else {
WARNING("%s: option '%s' not allowed here", plugin_name, child->key);
goto cleanup_fail;
static int ovs_stats_plugin_read(__attribute__((unused)) user_data_t *ud) {
bridge_list_t *bridge;
port_list_t *port;
- char devname[PORT_NAME_SIZE_MAX * 2];
pthread_mutex_lock(&g_stats_lock);
for (bridge = g_bridge_list_head; bridge != NULL; bridge = bridge->next) {
* is called after Interface Table update callback but before
* Port table Update callback. Will add this port on next read */
continue;
- meta_data_t *meta = meta_data_create();
- if (meta != NULL) {
- meta_data_add_string(meta, "uuid", port->iface_uuid);
- if (strlen(port->ex_vm_id))
- meta_data_add_string(meta, "vm-uuid", port->ex_vm_id);
- if (strlen(port->ex_iface_id))
- meta_data_add_string(meta, "iface-id", port->ex_iface_id);
- }
- snprintf(devname, sizeof(devname), "%s.%s", bridge->name, port->name);
- ovs_stats_submit_one(devname, "if_collisions", NULL,
- port->stats[collisions], meta);
- ovs_stats_submit_two(devname, "if_dropped", NULL,
- port->stats[rx_dropped], port->stats[tx_dropped],
- meta);
- ovs_stats_submit_two(devname, "if_errors", NULL,
- port->stats[rx_errors], port->stats[tx_errors],
- meta);
- ovs_stats_submit_two(devname, "if_packets", NULL,
- port->stats[rx_packets], port->stats[tx_packets],
- meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "crc",
- port->stats[rx_crc_err], meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "frame",
- port->stats[rx_frame_err], meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "over",
- port->stats[rx_over_err], meta);
- ovs_stats_submit_one(devname, "if_rx_octets", NULL,
- port->stats[rx_bytes], meta);
- ovs_stats_submit_one(devname, "if_tx_octets", NULL,
- port->stats[tx_bytes], meta);
- ovs_stats_submit_two(devname, "if_packets", "1_to_64_packets",
- port->stats[rx_1_to_64_packets],
- port->stats[tx_1_to_64_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "65_to_127_packets",
- port->stats[rx_65_to_127_packets],
- port->stats[tx_65_to_127_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "128_to_255_packets",
- port->stats[rx_128_to_255_packets],
- port->stats[tx_128_to_255_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "256_to_511_packets",
- port->stats[rx_256_to_511_packets],
- port->stats[tx_256_to_511_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "512_to_1023_packets",
- port->stats[rx_512_to_1023_packets],
- port->stats[tx_512_to_1023_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "1024_to_1522_packets",
- port->stats[rx_1024_to_1522_packets],
- port->stats[tx_1024_to_1522_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "1523_to_max_packets",
- port->stats[rx_1523_to_max_packets],
- port->stats[tx_1523_to_max_packets], meta);
- ovs_stats_submit_two(devname, "if_packets", "broadcast_packets",
- port->stats[rx_broadcast_packets],
- port->stats[tx_broadcast_packets], meta);
- ovs_stats_submit_one(devname, "if_multicast", "tx_multicast_packets",
- port->stats[tx_multicast_packets], meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "rx_undersized_errors",
- port->stats[rx_undersized_errors], meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "rx_oversize_errors",
- port->stats[rx_oversize_errors], meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "rx_fragmented_errors",
- port->stats[rx_fragmented_errors], meta);
- ovs_stats_submit_one(devname, "if_rx_errors", "rx_jabber_errors",
- port->stats[rx_jabber_errors], meta);
- meta_data_destroy(meta);
+
+ ovs_stats_submit_port(bridge, port);
+
+ if (interface_stats)
+ ovs_stats_submit_interfaces(bridge, port);
}
} else
continue;