
Observability Driven Development (ODD) with OpenTelemetry & Tracetest

Observability Driven Development (ODD) is a software development methodology that focuses on the ability to observe and analyze the behavior of a system in production.


Observability Driven Development is considered an evolutionary approach that emphasizes back-end code instrumentation. Microservices and Service-Oriented Architecture (SOA) bring a lot of complexity when it comes to understanding and tracing service-to-service communication. ODD emphasizes maintaining visibility into the system's internal state and understanding its behavior in order to optimize it and make it robust.


The goal of ODD is to enable teams to continuously monitor, measure, and improve the system's performance. This includes optimizing performance metrics, such as latency, throughput, and availability. ODD also helps development teams quickly identify and address any issues that may arise, such as errors or failures.


It also helps teams build systems that can handle complex workloads and scale as needed. It is particularly useful in environments where the system must adapt to change quickly, such as those involving large volumes of data or significant computational demand.


We now have a culture of trace-based testing. With a tool like Tracetest, back-end developers are not just generating E2E tests from OpenTelemetry-based traces, but changing the way we enforce quality—and encourage velocity—in even the most complex applications.



Tracetest works with your current OpenTelemetry based system. It has native integrations with Jaeger, Grafana Tempo, OpenSearch, and SignalFx.


At the core of ODD is Distributed Tracing, which records the paths taken by an HTTP request as it propagates through the applications and APIs you are running on your cloud-native infrastructure. Each operation in a trace is represented as a span, to which you can attach assertions: testable values that determine whether the span passes or fails. Unlike traditional API tools, trace-based testing asserts against both the system response and the trace results.
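
As a rough illustration of what a span looks like at the code level, the snippet below creates one with the OpenTelemetry Python SDK; the tracer name, span name, and attributes are placeholders, and the attributes are exactly the kind of values a trace-based assertion would later check:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Minimal SDK setup: print finished spans to the console
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("orders-service")

# Each operation becomes a span; the attributes set here are the values
# that assertions are evaluated against in trace-based testing
with tracer.start_as_current_span("GET /orders") as span:
    span.set_attribute("http.method", "GET")
    span.set_attribute("http.status_code", 200)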


OpenTelemetry provides code instrumentation for several programming languages, including Java, Python, Go, JavaScript, and .NET; see the OpenTelemetry documentation for the latest list of supported languages.



For example, automatic instrumentation with Python uses a Python agent that can be attached to any Python application. It dynamically injects bytecode to capture telemetry from many popular libraries and frameworks.


Setup:

pip install opentelemetry-distro \
        opentelemetry-exporter-otlp

opentelemetry-bootstrap -a install


Once the agent is installed, it's highly configurable.


opentelemetry-instrument \
    --traces_exporter console,otlp \
    --metrics_exporter console \
    --service_name your-service-name \
    --exporter_otlp_endpoint 0.0.0.0:4317 \
    python myapp.py

Alternatively, you can use environment variables to configure the agent:


OTEL_SERVICE_NAME=your-service-name \
OTEL_TRACES_EXPORTER=console,otlp \
OTEL_METRICS_EXPORTER=console \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=0.0.0.0:4317 \
opentelemetry-instrument \
    python myapp.py

This instrumentation acts as a wrapper around the rest of your application, running a tracer and sending traces to the OpenTelemetry Collector, which in turn passes them on to Tracetest.
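
For context, myapp.py can be any Python service; a minimal Flask app along the following lines (Flask is just an assumption here, since opentelemetry-bootstrap installs its instrumentation automatically when Flask is present) gets its request handling traced without any manual instrumentation:

# myapp.py - a minimal service used purely for illustration
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # When launched via opentelemetry-instrument, this handler runs inside
    # an automatically created "GET /" span
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=8080)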


How Tracetest Works


Tracetest works in two parts. First, it reads data from either the OpenTelemetry Collector or a JavaScript application that is instrumented for tracing. Then, it checks the data for correctness and validates the format. If it detects any errors, it reports them back to the user.


Tracetest also provides a way to simulate a live environment and test nearly every aspect of tracing. Using Tracetest's built-in APIs, developers can quickly and easily create realistic scenarios and simulate how their tracing code would behave in a production environment.


Tracetest Installation on CentOS 7


# add repository
cat <<EOF | $SUDO tee /etc/yum.repos.d/tracetest.repo
[tracetest]
name=Tracetest
baseurl=https://yum.fury.io/tracetest/
enabled=1
gpgcheck=0
EOF

# install
sudo yum install tracetest --refresh

Tracetest provides two options to run the server:

- Docker Compose

- Kubernetes


Once installed, start the Tracetest server using:


docker-compose -f tracetest/docker-compose.yaml up -d 

and browse to the Tracetest web UI running on localhost port 11633.

You can create and run tests via both the web UI and the command-line interface (CLI).


Click the Test tab, then Add Test Spec, to start setting assertions, which form the backbone of how you implement ODD and track the overall quality of your application in various environments.


To make an assertion based on the GET / span of our trace, select that span in the graph view and click Current span in the Test Spec modal. Alternatively, you can write the span selector directly using the Tracetest Selector Language.
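
As a sketch of what that might look like, a selector for the GET / span and a couple of assertions could read roughly as follows; the exact attribute names and syntax depend on your Tracetest version and on how the service is instrumented, so treat these values as placeholders:

span[tracetest.span.type="http" name="GET /"]

attr:http.status_code = 200
attr:tracetest.span.duration < 500ms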


Summary


Tracetest is designed to simulate real-world conditions and can be used to benchmark performance, detect regressions, and validate instrumentation. Tests can be customized to focus on specific components such as spans, metrics, and logs. Tracetest also provides detailed reporting on test results, including traces, metrics, logs, and other data captured during the test run. The data gathered can be used to identify performance bottlenecks, diagnose issues, and ensure that instrumentation is sending the correct data.

