Testing Dwarfguard and supported devices
Last updated: July 12, 2024 09:47
In this chapter, we describe the types of testing we perform on Dwarfguard, how they are carried out, and which device types are tested.
Functionality testing
Functionality testing focuses on end-to-end testing of features. The goal is to exercise each and every feature of the product and make sure the supported use cases work as expected with every release.
To illustrate the functionality testing process, here are some of the real test cases:
- Installing the Dwarfguard agent on a device, with device data being sent to the server and presented to the user.
- Alert functionality, including setting up a new alert, editing an existing alert, correct alert evaluation, and notifications being sent to a channel.
- Using the Devices table, including configuring the layout of the table via Settings and sorting and filtering on column values.
- Device naming and labeling, and using the labels for sorting/filtering.
- Dashboard showing all Monitoring groups (including creating new groups) and displaying correct data, including the list of raised alerts and the listing of triggered devices.
- Agent Profiles updates and their propagation to the real agents.
- Access to the logging information and in-app help.
- Basically any other feature, such as Changes Tracker, Webtunnels, etc.
Saying the tests are end-to-end means that every test includes, whenever possible, all the steps that happen in the real situation. Most of the use cases begin with an event happening on a device or devices. An example could be a device temperature rising above the threshold for an alarm to be triggered. Whenever feasible, real devices are used and the events actually triggered. When an event is unlikely to occur on its own, or it would take a lot of time and effort to trigger the situation we want to test, we simply emulate it - we send data captured from a real situation to the server to simulate the event happening.
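As a rough illustration of the emulation approach, the sketch below replays a previously captured metrics payload against the server; the endpoint path, token, and payload fields are assumptions made for the example, not Dwarfguard's actual agent API.

```python
import json
import urllib.request

# Hypothetical endpoint and token -- placeholders, not the real Dwarfguard API.
SERVER_URL = "https://dwarfguard.example.com/api/agent/report"
API_TOKEN = "test-token"

def replay_captured_report(capture_file: str) -> int:
    """Send a previously captured device report to the server as-is."""
    with open(capture_file, "r", encoding="utf-8") as fh:
        payload = json.load(fh)

    # Nudge the captured data to provoke the situation under test,
    # e.g. a temperature above the alert threshold (example field name).
    payload["metrics"]["temperature"] = 95.0

    request = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    print(replay_captured_report("captured_report.json"))
```

Replaying captured payloads like this lets us reproduce rare events, such as a temperature spike, on demand instead of waiting for them to occur on real hardware.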
Another typical start for many scenarios is a user requesting a configuration change in the system. For example, when a user updates a value in an agent profile definition, we expect all the devices that have this agent profile assigned to update and start behaving according to the new value in a timely fashion.
The above are examples of the test case's "starting end". As the end-to-end name suggests, the next step in testing is to check whether the other end shows or behaves as expected. Examples would be an updated value shown in the device details, an alert notification delivered to Slack, or devices reporting their data more frequently after a configuration change.
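A minimal sketch of such an "other end" check follows: a generic polling helper waits for the expected state within a time limit, while the concrete condition (here a hypothetical query of a device's reporting interval) stands in for a real Dwarfguard API or Slack check.

```python
import time
from typing import Callable

def wait_for(condition: Callable[[], bool],
             timeout_s: float = 120.0, poll_s: float = 5.0) -> bool:
    """Poll a condition until it holds or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return False

def reports_arrive_every(device_id: str, interval_s: int) -> bool:
    """Placeholder: would query the server for the device's recent report timestamps."""
    raise NotImplementedError("depends on the deployment under test")

def test_profile_change_propagates():
    # After changing the report interval in the agent profile, the device
    # (id and interval are example values) should follow it within the time limit.
    assert wait_for(lambda: reports_arrive_every("router-42", 30))
```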
Whenever the other end does not show what is expected, either failing to do so in an appropriate time or behaving differently than expected, investigation of the test failure starts by inspecting all parts of the system in detail.
Once all of the functionality tests have concluded successfully and no case is found failing, the functionality testing is deemed finished.
That said, not every release type has the same range of functionality testing. While a major or minor release calls for a complete functionality retest, the testing may be limited for a patchlevel release, depending on the number of changes/fixes that are introduced. More on that can be found in Releasing process.
Performance testing
Device testing
Device testing is usually part of the functionality testing mentioned above. That said, while for some of the test cases it makes sense to run the case on each and every supported device type, for others it does not.
Also, the device types differ in two ways:
- some device types provide functionality/metrics that are not available on other device types
- each device type has a different supported featureset (see below)
While the supported featureset differs per device type, there is always a minimal featureset supported for all device types. This includes:
- Reporting basic metrics to the Dwarfguard server.
- Overview of the metrics (monitoring) in Device Details and Devices table.
- Alerting on any of the alertable metrics (alerting), including notifications.
- Configuration of the agent - assigning an agent profile, prescribing report intervals in seconds, and so on.
- All server-only functionality that is not dependent on the device itself, such as the organization techniques: Monitoring Groups, Naming, Labelling, Changes Tracker, etc.
On top of the minimal functionality, the other features may or may not be supported for a specific device type. The most complete feature support is on Advantech cellular routers and generic Linux devices, as these two were the first ones introduced in Dwarfguard. Basically, any functionality present in the generic Linux devices agent or the Advantech cellular router agent can be introduced into another device type.
The current set of supported device types (alphabetically):
- Advantech cellular router - tested on v2+ devices.
- Linux-generic devices - tested on Debian GNU/Linux (x86-64, but the agent is architecture-agnostic).
- OpenWRT - tested on Raspberry Pi.
- Teltonika cellular routers - tested on RUT9x devices.
As the agent is written for the bash shell (and a very basic version of it at that), it is quite easy for us to add support for another device type based on a GNU/Linux operating system.
Upgrade testing
The upgrade from an older version of Dwarfguard to a newer one, in particular the auto-upgrade, needs to handle a lot of steps and also keep as much of the original data as possible.
While it is not possible to test each and every combination in which our customers use the Dwarfguard software (we simply do not have their devices and data), every supported upgrade path is thoroughly tested.
For the whole system, there are a few key elements, and all of these need to be tested:
- Upgrade of the application itself
- Upgrade of the database scheme and existing data
- Upgrade of the agents running on devices
The first step - the upgrade of the application itself - is actually quite simple to test and verify, as there are no (or only very slight) differences between deployments.
The second step is made more complicated by the fact that each deployment's data is unique, so while it is possible to identify and test the usual use cases, some deployments may use a specific setup that is not covered by upgrade testing.
The last step is even more complicated because the upgrade of an agent can happen only after the application upgrade is finished and the device (still running the old agent) contacts the server when sending new data. Because some of the devices may be offline for a considerably long period of time, it may happen that the agent needs to be upgraded by two (or even more) versions.
Because of the complexity described above, we usually support a single upgrade path and recommend our customers to run upgrades as soon as they are available. If you, for example, leave out a step in the upgrade path, the upgrade check will fail and you will still need to run all the previous upgrade steps to be able to run the last one. Example (a sketch of the check follows the list):
- running Dwarfguard 0.8.1
- not updating to 0.8.2
- attempting to upgrade 0.8.1 -> 0.8.3 will fail
- two consecutive upgrade operations (0.8.1 -> 0.8.2 followed by 0.8.2 -> 0.8.3) are required
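The following sketch illustrates the idea behind that check; the version arithmetic and the single-step rule are simplifications assumed for the example, not the actual Dwarfguard upgrade code.

```python
from typing import List, Tuple

def parse_version(version: str) -> Tuple[int, int, int]:
    """Split a 'major.minor.patch' string into integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_single_step(installed: str, target: str) -> bool:
    """Return True when the target is the direct successor of the installed version.

    Simplified rule for the example: only the patchlevel may increase, and only by one.
    """
    inst = parse_version(installed)
    tgt = parse_version(target)
    return tgt[:2] == inst[:2] and tgt[2] == inst[2] + 1

def upgrade_steps(installed: str, target: str) -> List[Tuple[str, str]]:
    """List the consecutive upgrade operations needed to reach the target version."""
    steps = []
    current = parse_version(installed)
    final = parse_version(target)
    while current < final:
        nxt = (current[0], current[1], current[2] + 1)
        steps.append(("%d.%d.%d" % current, "%d.%d.%d" % nxt))
        current = nxt
    return steps

# 0.8.1 -> 0.8.3 is not a single step, so two operations are listed:
print(is_single_step("0.8.1", "0.8.3"))  # False
print(upgrade_steps("0.8.1", "0.8.3"))   # [('0.8.1', '0.8.2'), ('0.8.2', '0.8.3')]
```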