
Conversation

@swamirishi
Contributor

What changes were proposed in this pull request?

Currently, when RDBStoreCodecBufferIterator returns a KeyValue to a caller, that KeyValue is not guaranteed to remain consistent: the CodecBuffer backing it is reused, so a subsequent call to next() on the iterator can overwrite the value the caller is still holding. This also makes the entire iterator implementation not thread safe.

The proposal is to maintain a pool of buffers and return a closeable KeyValue handle to the caller; the underlying codec buffers are released back to the pool only when that handle is closed.
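For illustration only, here is a minimal sketch of what such a pooled, closeable key-value handle could look like. All names here (PooledKeyValue, PooledKeyValuePool, and byte[] standing in for CodecBuffer) are hypothetical and are not the classes in the actual patch.

```java
// Hypothetical sketch only; names and types are illustrative, not the actual patch.
// byte[] stands in for CodecBuffer.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

final class PooledKeyValue implements AutoCloseable {
  private final byte[] keyBuffer;
  private final byte[] valueBuffer;
  private final BlockingQueue<PooledKeyValue> pool;

  PooledKeyValue(int capacity, BlockingQueue<PooledKeyValue> pool) {
    this.keyBuffer = new byte[capacity];
    this.valueBuffer = new byte[capacity];
    this.pool = pool;
  }

  byte[] key()   { return keyBuffer; }
  byte[] value() { return valueBuffer; }

  @Override
  public void close() {
    // Only at this point may the iterator reuse these buffers for another entry.
    pool.offer(this);
  }
}

final class PooledKeyValuePool {
  private final BlockingQueue<PooledKeyValue> handles;

  PooledKeyValuePool(int poolSize, int bufferCapacity) {
    this.handles = new ArrayBlockingQueue<>(poolSize);
    for (int i = 0; i < poolSize; i++) {
      handles.offer(new PooledKeyValue(bufferCapacity, handles));
    }
  }

  /** Blocks until a handle is free; next() would fill its buffers from RocksDB. */
  PooledKeyValue borrow() throws InterruptedException {
    return handles.take();
  }
}
```

Under a scheme like this, next() borrows a handle, fills it, and returns it to the caller; because the buffers go back to the pool only on close(), a previously returned KeyValue can no longer be silently overwritten by the next iteration.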

What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-12742

How was this patch tested?

Added unit test for the new implementation

@szetszwo
Contributor

szetszwo commented Jan 2, 2026

@swamirishi , questions:

  • How is this related to HDDS-14154 and its subtasks?
  • This sounds like a bug. However, why does it not seem to cause any problems in the current code?

@swamirishi
Contributor Author

  • How is this related to HDDS-14154 and its subtasks?

This is part of the spliterator implementation requirement: we would use this iterator to fetch values from RocksDB one at a time and then deserialize the returned CodecBuffers asynchronously, in parallel across threads.
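As a hedged sketch of that consumption pattern (reusing the hypothetical PooledKeyValue from the description above and a caller-supplied decoder; none of this is the actual patch), deserialization could be fanned out across a thread pool while iteration stays single-threaded:

```java
// Illustrative only: assumes the hypothetical PooledKeyValue handle sketched
// earlier and a caller-supplied decoder; not the actual Ozone API.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

final class ParallelDecode {
  static <T> List<T> decodeAll(Iterator<PooledKeyValue> iterator,
                               Function<byte[], T> decode, int threads) {
    ExecutorService executor = Executors.newFixedThreadPool(threads);
    try {
      List<CompletableFuture<T>> futures = new ArrayList<>();
      // Iteration is single-threaded; each next() hands out a distinct handle.
      while (iterator.hasNext()) {
        PooledKeyValue kv = iterator.next();
        futures.add(CompletableFuture.supplyAsync(() -> {
          try (PooledKeyValue handle = kv) {   // releases buffers back to the pool
            return decode.apply(handle.value());
          }
        }, executor));
      }
      List<T> results = new ArrayList<>(futures.size());
      for (CompletableFuture<T> future : futures) {
        results.add(future.join());
      }
      return results;
    } finally {
      executor.shutdown();
    }
  }
}
```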

  • This sounds like a bug. However, why does it not seem to cause any problems in the current code?

RDBStoreAbstractIterator is an internal iterator, and today deserialization always creates a POJO from the CodecBuffer before the buffer is reused, so the problem stays hidden. However, if we want to collect multiple CodecBuffer values, we cannot do so currently, since calling next() overwrites the same CodecBuffer again.
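To make that limitation concrete, here is a self-contained toy example; plain byte[] and an anonymous iterator stand in for CodecBuffer and the RDB iterator, so this is not the real API, only a demonstration of the aliasing behavior described above:

```java
// Toy demonstration of the reused-buffer pitfall; byte[] stands in for
// CodecBuffer and the anonymous iterator mimics an iterator that reuses
// a single underlying buffer across next() calls.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

final class ReusedBufferPitfall {
  public static void main(String[] args) {
    final byte[] shared = new byte[1];                 // the single reused buffer
    Iterator<byte[]> it = new Iterator<byte[]>() {
      private int i = 0;
      @Override public boolean hasNext() { return i < 3; }
      @Override public byte[] next() { shared[0] = (byte) i++; return shared; }
    };

    List<byte[]> collected = new ArrayList<>();
    while (it.hasNext()) {
      collected.add(it.next());                        // stores a reference, not a copy
    }
    // Every element aliases the same buffer, so this prints "2" three times.
    collected.forEach(b -> System.out.println(b[0]));
  }
}
```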

Contributor

@szetszwo left a comment


This is part of the spliterator implementation requirement ...

For the spliterator, we need a design first. Please put this on hold.

As mentioned, the previous design is inefficient.

