7 min read

Using Redis cache in Spring Boot

December 13, 2020

In the previous article, we discussed how to enable a simple caching implementation. But there were a few drawbacks to in-memory caching. This is where the Redis cache store comes into the picture.

Redis is a high-performance open source cache store that keeps cached content in memory for quick retrieval. Redis can be used as a cache store, a database, and even a message broker. We will concentrate on the caching aspects in this article. Redis provides a basic set of data types such as Strings, Hashes, Lists, Sets and Streams. We can even store binary content, such as the bytes of serialized Java objects.

Before going further, I would like you to read about the basics of caching in my previous post, Understanding Spring Boot Caching with an example. It will give you a better grounding in the fundamentals of caching.

For the examples, I will run the redis server using a docker image. It is up to you if you want to set up a standalone instance instead. Here is my docker-compose file, which initializes a redis image along with redis-commander, a GUI for exploring the redis data store.

version: '3'
services:
  redis:
    container_name: redis
    hostname: redis
    image: redis
    ports:
      - "6379:6379"
  redis-commander:
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"

The above step is only for testing things locally. For production, please follow official Redis documentation for the server installation.

Run the following command for starting the containers.

docker-compose -f docker-compose.yml up -d

Once redis and redis-commander are up, you can open http://localhost:8081/ in your browser to see the redis-commander window. You should see the following screen or similar.

Redis commander GUI window

Setting up project

Like always, Spring Boot makes things easier with the help of its starter dependencies. All you have to do is add the redis starter.
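For a Maven build, the starter can be added like this (for Gradle, the coordinates are the same):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```

The starter pulls in Spring Data Redis and the Lettuce client, so no version needs to be specified when using the Spring Boot parent.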


For those who have to point the application to a dedicated redis server, you need to set the following properties with the appropriate values.
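With Spring Boot 2.x (which this article uses), the connection settings look like this; the host and password values below are placeholders you should replace with your own:

```properties
spring.redis.host=your-redis-host
spring.redis.port=6379
spring.redis.password=your-password
```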


If the redis server is running locally, you are pretty much set up already. By default, Spring Boot will try to connect to localhost on port 6379 without a password. This means that we don't have to configure anything in the application properties.

Setting up caching annotations

To enable caching, we need to do three things.

  1. Add @EnableCaching to one of your configuration classes (preferably the main class, which is annotated with @SpringBootApplication).
  2. Add the @Cacheable annotation to the methods for which you need to enable caching.
  3. Optionally, add a @CacheEvict annotation where you need to clear the cached object.
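Step 1 is a one-line change on the bootstrap class. A minimal sketch (the class name here is hypothetical):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

// @EnableCaching switches on Spring's caching infrastructure;
// with the redis starter on the classpath, redis becomes the cache backend.
@SpringBootApplication
@EnableCaching
public class RedisCacheApplication {
    public static void main(String[] args) {
        SpringApplication.run(RedisCacheApplication.class, args);
    }
}
```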

Here is a simple example that demonstrates all the above.

import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ItemService {
    private static final Logger logger = LoggerFactory.getLogger(ItemService.class);
    private final ItemRepository itemRepository;

    public ItemService(ItemRepository itemRepository) {
        this.itemRepository = itemRepository;
    }

    public List<Item> items() {
        return itemRepository.findAll();
    }

    @Cacheable(value = "items", key = "#id")
    public Item getItem(Integer id) {
        Item item = itemRepository.findById(id).orElseThrow(RuntimeException::new);
        logger.info("Loading data from DB {}", item);
        return item;
    }

    public Item createItem(Item item) {
        return itemRepository.save(item);
    }

    @CacheEvict(value = "items", key = "#id")
    public Item updateItem(Integer id, Item request) {
        Item item = getItem(id);
        // Copy the updated fields onto the existing entity before saving
        item.setProductName(request.getProductName());
        item.setPrice(request.getPrice());
        return itemRepository.save(item);
    }
}
We discussed these annotations and their behaviour in detail in the post about Understanding Spring Boot Caching; you may not want to miss the information shared over there. You can find the source code for the @Controller and @Repository classes in the github repository linked below. For the sake of simplicity, I am not including them here.

Redis in Action

Once we start the application, we can see that the first request to http://localhost:8080/items/2 returns the response from the database, and a cache entry is made in the redis server. We know this because:

  1. The logs show that the item with id 2 was loaded from the database.

     c.s.e.s.cache.service.ItemService:Loading data from DB Item{id=2,productName='Pants Large',price=21.99}

  2. When we open redis-commander, we can see the items cache containing an entry with the key 2.

Redis Commander cache view

The cached value is binary. Don't worry, it is just the Item object converted to serialized Java bytes.

What about the cache evict? Let’s try updating item 2.

curl -X PUT \
  http://localhost:8080/items/2 \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "productName": "Pants Large",
    "price": 14.99
}'

HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Sun, 13 Dec 2020 18:11:16 GMT

{"id":2,"productName":"Pants Large","price":14.99} 

Redis command view after evict

As you can see, the cached value for the key 2 is now gone. The next time a getItem call happens, the value will be cached again.

Caching between multiple application instances

One problem we faced with the in-memory cache is that cached values are local to a specific application instance, so an eviction in one instance does not happen in the others. Because all instances now point to a single redis server, an eviction request from one instance means every instance gets the updated value on its next lookup. Let's see this with an example scenario.

Let's run the same application again on a different port, 7070 (using --server.port=7070).

We can confirm that the second instance has successfully connected to the redis server in a couple of ways.

  1. By checking logs. There should not be any error when RedisAutoConfiguration takes place.
  2. By checking the client connections on the server. Click the local (redis:6379:0) element in the side nav of redis-commander, and it will show the server stats. In the results, find the value for Connected clients. The value should be 3: 2 connections from the application instances and 1 connection from redis-commander itself.

Let’s call the API from application running on port 8080.

curl -X GET http://localhost:8080/items/2

This creates a cache entry in redis. Now try hitting server on 7070 for the same resource.

curl -X GET http://localhost:7070/items/2

In the logs of the application running on port 7070, there is no sign of data being loaded from the database. This tells us that the data cached by the application running on 8080 is being used by the application running on 7070.

Timeout for cached values

One more difference between redis and the in-memory cache is that redis can evict cached entries based on a time-to-live (TTL). To enable this feature, add the following parameter to your application.properties.
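The TTL is set via spring.cache.redis.time-to-live, which accepts a duration (a bare number is treated as milliseconds):

```properties
spring.cache.redis.time-to-live=5m
```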


The above configuration will automatically evict any cached entry after 5 minutes.

What have we learned?

  1. Redis can provide a robust cache store with minimal changes to your project.
  2. Redis acts as a central cache for all of your instances, so there are no inconsistent cache values between different servers.
  3. We set up a tool called redis-commander for exploring the contents of the redis cache store.

Things to consider

  1. Cacheable objects must be Serializable.

    • The reason is how redis stores Java objects. The safest way to store objects outside the JVM is to write them as serialized bytes, and to do that, the classes must implement Serializable.
  2. Try not to cache large objects.

    • Even though redis server is separate from application server, large objects in the cache will cause performance issues.
  3. Make sure all applications using the same cache are at the same version.

    • Cached objects created by an application at version A may not be compatible with an application at version B. Such situations yield unpredictable results, which are not good for business.
  4. Application restarts don’t affect cache stored in redis.

    • Unlike in-memory caching, redis data store is outside of application JVM. This means that the cached data is available even after the restart of an application.
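Point 1 above can be sketched with a plain serialization round trip, which is essentially what happens on a Redis put and get with the default JDK serializer. The CacheableItem class below is hypothetical; its fields just mirror the Item used earlier:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A minimal cache-friendly value class: it implements Serializable,
// so it can be written to and read back from redis as bytes.
public class CacheableItem implements Serializable {
    private static final long serialVersionUID = 1L;

    private final Integer id;
    private final String productName;
    private final double price;

    public CacheableItem(Integer id, String productName, double price) {
        this.id = id;
        this.productName = productName;
        this.price = price;
    }

    public String getProductName() { return productName; }
    public double getPrice() { return price; }

    // Serialize to bytes and back, like a cache put followed by a get.
    static CacheableItem roundTrip(CacheableItem item) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(item);
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (CacheableItem) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        CacheableItem copy = roundTrip(new CacheableItem(2, "Pants Large", 14.99));
        System.out.println(copy.getProductName() + " " + copy.getPrice());
    }
}
```

If the class does not implement Serializable, the write fails with a NotSerializableException, which is exactly the error you would see from the cache layer at runtime.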

Finally, Redis is one of the many cache solutions officially supported by Spring Boot. Try the rest and choose the one that fits you best.

The code and the docker-compose.yml for this example are available in the github repository below.