Performance problem with your application or database?

Where is the problem? That is the question. Database latency is a performance issue that is difficult to identify. Find out more about latency and how to test it.

Where is the problem? Is it your car, the engine, or a highway that is not in good enough condition for speed? That is the question.

When it comes to performance problems with applications, databases, and networks, it can often be difficult to pinpoint the exact source of the issue. Just like trying to diagnose a problem with a car, it could be a problem with the application itself, the database that it relies on, or the network infrastructure that it runs on. And, just like determining whether the issue is with the car, the engine, or the highway, finding the root cause of a performance problem in your application, database, or network can be a complex task that requires careful analysis and troubleshooting. This article will explore various performance issues that can occur in these areas, and provide guidance on how to identify and address them to ensure that your applications, databases, and networks are running at optimal performance.

In this article, I also share steps to test database latency from your local machine to a remote PostgreSQL database, or from a Pod running in your Kubernetes cluster, to measure intra-cluster or inter-cluster latency.

Check the database latency

Database latency is a performance issue that can be difficult to identify. Latency is the time it takes for data to travel from one point to another, and it can be measured in milliseconds. There are several ways you can try to improve the performance of your application when it comes to database latency. Here are a few expert suggestions you might consider:

Database tuning: Database tuning is the process of configuring a database to achieve the desired performance. The database configuration parameters are usually set at the time of installation. Some parameters can be changed at runtime, while others are fixed; which ones depends on your database vendor.

Optimize your database schema and queries: This can often be the most effective way to improve database performance. Make sure you have appropriate indexes in place and consider using techniques like denormalization to reduce the number of queries you need to execute.
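As a sketch of that kind of check: the table and column names (`orders`, `customer_id`) below are made up, and `PG_URL` is a placeholder for your own connection string. With a real database, a `Seq Scan` in the plan for a selective query usually points to a missing index.

```shell
# Hypothetical example: inspect a query plan and add an index.
# "orders" and "customer_id" are made-up names; PG_URL is a placeholder.
QUERY="SELECT * FROM orders WHERE customer_id = 42;"
INDEX="CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id);"
if command -v psql >/dev/null 2>&1 && [ -n "$PG_URL" ]; then
  psql "$PG_URL" -c "EXPLAIN ANALYZE $QUERY"   # look for Seq Scan vs Index Scan
  psql "$PG_URL" -c "$INDEX"
else
  # no database at hand: just show what would be executed
  echo "EXPLAIN ANALYZE $QUERY"
  echo "$INDEX"
fi
```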

Use a faster database engine: If your current database engine is not performing well, you might consider switching to a faster one. Options like Redis, Memcached, and in-memory databases can be significantly faster than traditional disk-based engines.

Cache query results: If you have frequently-requested data that doesn't change very often, you can improve performance by caching the results of those queries in memory. This can reduce the load on your database and improve the speed of your application.
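A minimal sketch of the idea, using a plain file as the cache and a stub in place of the real database call. In practice you would use Redis or Memcached and key the cache per query; this toy version caches a single result with a TTL, purely to show the hit/miss flow.

```shell
# Minimal sketch of caching a query result in a file with a 60-second TTL.
# run_query is a stub for the real database call.
CACHE=/tmp/query-cache.txt
TTL=60
rm -f "$CACHE"                       # start clean for the demo
run_query() { echo "result-for-$1"; }
cached_query() {
  local age
  if [ -f "$CACHE" ]; then
    age=$(( $(date +%s) - $(stat -c %Y "$CACHE") ))   # GNU stat
    if [ "$age" -lt "$TTL" ]; then
      cat "$CACHE"                   # cache hit: no database round-trip
      return
    fi
  fi
  run_query "$1" | tee "$CACHE"      # cache miss: query and store
}
first=$(cached_query 42)             # miss: hits the "database"
second=$(cached_query 42)            # hit: served from the file
echo "$first / $second"
```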

Use a distributed database: If your database is struggling to keep up with the load, you might consider distributing the workload across multiple machines, decomposing the database into multiple smaller units that can handle requests individually (shard management, more on this below). This can be done using a distributed database system like Cassandra, MongoDB, or Vitess.

Use a database proxy: A database proxy sits between your application and your database, and can be used to cache queries, balance load across multiple servers, and even automatically tune your schema and queries.

Use a different storage engine: If you are using a relational database, you might consider switching to a different storage engine. For example, the MyISAM engine is generally faster for reading than the InnoDB engine but is less durable.

Scale vertically: If you have a powerful machine with a lot of memory and CPU, you might be able to improve performance by "scaling vertically" - that is, by giving your database more resources to work with.

Shard Management

What happens if the volume of data or the number of requests overwhelms your database?

Shard management is a way of dividing a large database into smaller pieces called "shards." This is often done when a single database is not able to handle the volume of data or the number of requests that it needs to handle.

Imagine you have a giant cookbook with every recipe in the world in it. It would be very difficult to find a specific recipe in such a huge book, and it would also be hard for the person cooking to carry it around. So instead, you might decide to divide the cookbook into smaller books, with each book containing a specific type of recipe (like "desserts" or "vegetarian dishes"). This way, it's easier to find the recipe you're looking for, and it's also easier to carry just one small book around with you when you're cooking.

In the same way, when you have a very large database, it can be difficult to find the information you need and it can also be slow to work with. So you might decide to divide the database into smaller pieces, or shards. This way, it's easier to find the information you need and it's also faster to work with.
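The routing rule that decides which shard a record lives in can be as simple as a modulus on the record's key. A minimal sketch (shard count and customer IDs are made up):

```shell
# Minimal sketch of hash-based shard routing: the key decides which of the
# smaller databases (shards) a record lives in.
SHARDS=4
route() {
  echo $(( $1 % SHARDS ))   # same key always maps to the same shard
}
for id in 7 8 9 100; do
  echo "customer $id -> shard $(route $id)"
done
```

Real systems use more elaborate schemes (consistent hashing, range-based sharding) so that adding a shard does not remap every key, but the principle is the same.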

Test the latency between your application and database

There are a few different ways you can test the latency between your application and database:

Use a benchmarking tool: There are many tools available that can help you benchmark the performance of your database. Some popular options include sysbench and pgbench (for PostgreSQL), or, for application testing, Postman, JMeter, and Apache Bench. These tools allow you to send a large number of queries to your database and measure the time it takes for them to complete.
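As a sketch of how pgbench is typically used (these are standard pgbench flags; the database name is assumed to be in `PGDATABASE` and safe to fill with test data):

```shell
# Typical pgbench session (a sketch; PGDATABASE is an assumed scratch database).
INIT="pgbench -i -s 10"         # initialize test tables, scale factor 10
RUN="pgbench -c 10 -j 2 -T 60"  # 10 clients, 2 threads, 60 s; reports latency average
if command -v pgbench >/dev/null 2>&1 && [ -n "$PGDATABASE" ]; then
  $INIT "$PGDATABASE" && $RUN "$PGDATABASE"
else
  echo "pgbench or PGDATABASE not available; would run: $INIT <db> && $RUN <db>"
fi
```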

Use a profiler: A profiler is a tool that can help you understand how your application is interacting with your database. Most database engines have their own profilers, which can give you detailed information about the queries being executed and the time it takes for each one to complete.

Use a monitoring tool: Monitoring tools like New Relic, AppDynamics, and Datadog can help you track the performance of your application in real-time. These tools can give you insights into the performance of your database, including latency.

Test manually: If you don't have access to any of these tools, you can test the latency between your application and database manually. One simple way to do this is to use a stopwatch to measure the time it takes for a specific query to execute.
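The stopwatch approach can be scripted. A minimal sketch, assuming GNU `date` (which supports nanoseconds via `%N`); the no-op `:` stands in for the real call, which against Postgres would be something like `psql "$URL" -qAt -c "SELECT 1;"`:

```shell
# Scripted stopwatch: average the wall-clock time of N repetitions.
N=50
start=$(date +%s%N)
for i in $(seq 1 $N); do
  :   # placeholder for the real query, e.g. psql "$URL" -qAt -c "SELECT 1;" >/dev/null
done
end=$(date +%s%N)
avg_us=$(( (end - start) / N / 1000 ))
echo "average per call: ${avg_us} microseconds over $N runs"
```

Averaging over many runs matters: a single measurement is dominated by noise such as connection setup and caching effects.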

Create your tool and deploy it to test the latency

Finally... I will show you how to containerize a tool and deploy it in a Kubernetes cluster, allowing you to trigger and benchmark requests from the same place where your application is deployed.

Based on [the example](https://docs.readyset.io/guides/tutorial/#step-4-check-latencies) in the ReadySet documentation.

Check Latency is a containerized tool that can be instantiated within the Docker engine or deployed in a Kubernetes cluster, letting you trigger and benchmark requests from the cluster where your application is deployed.

Start an instance

In the example below, we are executing the benchmark against a Neon serverless database instance, but of course, you can replace DB_SERVER with the address of your own Postgres instance.

We are going to use two different options: (1) with an env file, and (2) with the env variables passed as arguments.

WARN: Be sure you are using the appropriate parameters for your Postgres database; otherwise, you will get a `could not connect to server: Connection refused` error.

Setting up the variables

```shell
DB_SERVER=ab-thisis-adummysrv-123456.us-east-2.aws.neon.tech
DB_USER=the-usr
PASSWD=2ecretPa22w0rd
DB_INSTANCE=db-instance
PROJECT_ID=another-dummy-654321
# create a file with vars
echo DB_SERVER=$DB_SERVER > la-rebelion.txt
echo DB_USER=$DB_USER >> la-rebelion.txt
echo PASSWD=$PASSWD >> la-rebelion.txt
echo DB_INSTANCE=$DB_INSTANCE >> la-rebelion.txt
echo PROJECT_ID=$PROJECT_ID >> la-rebelion.txt
echo URL="postgres://${DB_USER}:${PASSWD}@${DB_SERVER}/${DB_INSTANCE}?sslmode=require&options=project%3D${PROJECT_ID}" >> la-rebelion.txt
echo QUERY="SELECT 1;" >> la-rebelion.txt
echo REPEAT=1000 >> la-rebelion.txt
# take a quick look at the env file content
cat la-rebelion.txt
```

Run the container

```shell
docker run --env-file la-rebelion.txt \
    -d -i --name db-test-la-rebelion \
    adrianescutia/check-latency-db:0.1

# check logs
docker logs -f db-test-la-rebelion
```

You can also set the environment variables in your session and then pass them to the `docker run` command.

```shell
export URL="postgres://${DB_USER}:${PASSWD}@${DB_SERVER}/${DB_INSTANCE}?sslmode=require&options=project%3D${PROJECT_ID}"
export QUERY="SELECT 1;"
export REPEAT=1000
docker run -d -i --name db-test-la-rebelion \
    -e URL \
    -e QUERY \
    -e REPEAT \
    adrianescutia/check-latency-db:0.1
```
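To run the same check from inside your Kubernetes cluster, as mentioned at the start of the article, one option is a one-off Pod. A sketch, assuming kubectl with a working kubeconfig, the same three environment variables, and a placeholder connection string:

```shell
# Sketch: the same check as a one-off Pod inside your cluster, so requests
# originate next to your application. The connection string is a placeholder.
POD=db-test-la-rebelion
IMAGE=adrianescutia/check-latency-db:0.1
URL='postgres://user:pass@host/db?sslmode=require'
if command -v kubectl >/dev/null 2>&1; then
  kubectl run "$POD" --restart=Never --image="$IMAGE" \
    --env="URL=$URL" --env="QUERY=SELECT 1;" --env="REPEAT=1000"
  kubectl logs -f "$POD"     # watch the latency results
else
  echo "kubectl not found; would create Pod $POD from image $IMAGE"
fi
```

Running the Pod in the same namespace and node pool as your application gives you the intra-cluster numbers; pointing URL at a database in another cluster gives the inter-cluster ones.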

Environment Variables

The check-latency image uses only three (3) environment variables, of which only REPEAT is optional.

URL

The standard PostgreSQL connection URI (the `postgres://user:password@host/dbname` format shown above).

QUERY

Use a representative query whose performance you would like to validate. For example, test a simple `SELECT 1;` first and then a heavier query, and compare the times; the difference helps you determine whether the problem is network latency or whether your database schema and queries need to be tuned (as explained above).

REPEAT

Defines the number of times you want the tool to execute the request against the URL.

License

Creative Commons License

Check Latency by La Rebelion Labs is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://github.com/la-rebelion.

Conclusion

In conclusion, running a Docker instance to test database latency is an effective way to measure the performance of your database and identify potential bottlenecks. It allows you to simulate real-world usage conditions and adjust your database schema as needed to improve performance. Remember that a well-designed database schema can greatly impact the overall performance of your application and should be given careful consideration during development. By testing your database in a controlled environment, using Docker or running the tool as a Pod in your Kubernetes cluster, you can ensure that your application will perform well under heavy load and provide a positive user experience.


Don't forget to subscribe to the distribution list to get the latest updates on Kubernetes and Helm deployments.
