Q1: How to acquire the Kafka offset when a node restarts
When using Kafka as the WAL backend, if the Writer's machine (e.g. node00) crashes and I set up a new node remotely (e.g. node01), node01 will fetch all SSTs from S3. But how does node01 acquire the correct Kafka offset?
Q2: Purpose of the local cache in CloudLogControllerImpl
What is the purpose of the cache directory on the local machine returned by CloudLogControllerImpl::GetCacheDir?
astor-oss changed the title to "Using Kafka as WAL backend, is data really not lost when a node crashes?" on Jun 27, 2021
A1: Your code has to store the Kafka offset in your RocksDB database. It is outside the rocksdb-cloud code because rocksdb-cloud is not aware of Kafka at all.
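A minimal sketch of what that answer implies, assuming you drive the Kafka consumer yourself: the application writes the last-applied offset into the same RocksDB instance (and in the same WriteBatch as the data it covers, so the two commit atomically). The key name `kOffsetKey` and the helpers `ApplyRecord`/`RecoverOffset` are hypothetical, not part of rocksdb-cloud.

```cpp
#include <cstdint>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Hypothetical application-level key (not defined by rocksdb-cloud) that
// remembers the last Kafka offset already applied to the database.
static const std::string kOffsetKey = "__kafka_wal_offset__";

// Apply one record and record its Kafka offset atomically in one WriteBatch,
// so the stored offset never gets ahead of or behind the stored data.
rocksdb::Status ApplyRecord(rocksdb::DB* db, const std::string& key,
                            const std::string& value, int64_t kafka_offset) {
  rocksdb::WriteBatch batch;
  batch.Put(key, value);
  batch.Put(kOffsetKey, std::to_string(kafka_offset));
  return db->Write(rocksdb::WriteOptions(), &batch);
}

// On restart (e.g. on node01 after it has pulled the SSTs from S3), read the
// stored offset back; the Kafka consumer can then resume from offset + 1.
int64_t RecoverOffset(rocksdb::DB* db) {
  std::string value;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), kOffsetKey, &value);
  if (!s.ok()) {
    return -1;  // nothing stored yet; start from the earliest offset
  }
  return std::stoll(value);
}
```

Because the offset lives in the SSTs that node01 downloads from S3, it travels with the data; the new node only needs to replay the Kafka topic from the recovered offset onward.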
Thanks for your attention. If the WAL contains a delete, but that WAL entry has not yet been persisted to an SST or to S3 when the node crashes, will that cause data loss?