Add new method GetItemsAndFlush #179

Open
oGabrielArruda opened this issue Sep 4, 2024 · 4 comments

@oGabrielArruda

We are facing a situation where we need to flush the entire cache while capturing a snapshot of its exact state before doing so.

Using the Items function followed by Flush is not concurrency-safe, because a Set operation could occur between the two calls, causing the Flush call to mistakenly delete the newly added items.

A workaround would be to wrap the cache in our own mutex that blocks writes while these two sequential calls are made. However, it would be much simpler and more efficient to use the library's existing mutex.
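
For concreteness, a minimal sketch of that workaround (guardedCache and ItemsAndFlush are illustrative names, not part of this library, and every write has to go through the wrapper for the guarantee to hold):

    package main

    import (
        "sync"
        "time"

        cache "github.com/patrickmn/go-cache"
    )

    // guardedCache serializes writes and the snapshot-and-flush pair
    // behind its own mutex, so no Set can slip in between Items and Flush.
    type guardedCache struct {
        mu sync.Mutex
        c  *cache.Cache
    }

    func (g *guardedCache) Set(k string, v interface{}, d time.Duration) {
        g.mu.Lock()
        defer g.mu.Unlock()
        g.c.Set(k, v, d)
    }

    // ItemsAndFlush atomically snapshots the cache's contents and clears it.
    func (g *guardedCache) ItemsAndFlush() map[string]cache.Item {
        g.mu.Lock()
        defer g.mu.Unlock()
        snapshot := g.c.Items()
        g.c.Flush()
        return snapshot
    }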

@oGabrielArruda
Author

oGabrielArruda commented Sep 4, 2024

Just opened PR #180.

@arp242

arp242 commented Sep 12, 2024

I wonder if there isn't a more generic way to implement this. What you want is:

c := cache.New()

c.Lock()

c.Items()
c.Flush()

c.Unlock()

But the "Items()" and "Flush()" are pretty specific to your use case. It's probably not appropriate to include that in a generic library. Someone else might want "Items()", "delete these six specific keys", "increment this one value" in there. Or something.

Something like the Lock() and Unlock() methods above, or:

c.Locked(func() {
    c.Items()
    c.Flush()
})

Would be much better.

It would be rather tricky to implement well, though.

Maintaining your own wrapper with its own mutex would probably be better, although with a small performance hit from the extra mutex; but that's in the nanosecond range, so probably not an issue unless you really use it in a tight loop.
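
If you want to put a number on that, a standard Go benchmark would do (a sketch; this goes in a _test.go file, and guardedCache is the hypothetical wrapper sketched in the issue description above):

    package main

    import (
        "testing"

        cache "github.com/patrickmn/go-cache"
    )

    // BenchmarkDirectSet measures Set on the cache directly.
    func BenchmarkDirectSet(b *testing.B) {
        c := cache.New(cache.NoExpiration, 0)
        for i := 0; i < b.N; i++ {
            c.Set("k", i, cache.DefaultExpiration)
        }
    }

    // BenchmarkWrappedSet measures Set through the extra mutex.
    func BenchmarkWrappedSet(b *testing.B) {
        g := &guardedCache{c: cache.New(cache.NoExpiration, 0)}
        for i := 0; i < b.N; i++ {
            g.Set("k", i, cache.DefaultExpiration)
        }
    }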

@oGabrielArruda
Author

I really like the idea of the cache.Locked function!
It doesn't seem tricky to implement; I mean, what's tricky about providing a function that acquires the lock, executes a received callback, and then releases the lock?

func (c *cache) Locked(callback func()) {
    c.mu.Lock()
    defer c.mu.Unlock()

    callback()
}

But now, thinking more about the problem, I still don't think it's that specific. People often want to take a snapshot and then clean the cache. A minor change might be to return the map when Flush is called. If you want to use it, fine; if you don't, that's not a problem either.

func (c *cache) Flush() map[string]Item {
    c.mu.Lock()
    snapshot := c.items // no need to deep copy
    c.items = map[string]Item{}
    c.mu.Unlock()

    return snapshot
}

I can in fact implement both, but I think returning the map is much simpler, although I doubt Patrick reads these issues nowadays haha

@arp242

arp242 commented Sep 12, 2024

In your example, if you call c.Items() in the callback then that will also lock, but since the mutex is already held from the Locked() call, it will hang forever, as the lock will never be released.

You can't really use a regular mutex for this; I've been thinking about a nice way to solve this kind of issue (it's not the first time I've run into it), but haven't really gotten around to actually implementing something.
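
One common way around the re-entrancy problem (a sketch of the pattern, not anything go-cache provides; the types are minimal stand-ins for the library's): have Locked hand the callback a view whose methods assume the lock is already held, so the callback never touches the mutex itself.

    package main

    import "sync"

    // Minimal stand-ins for the library's types, just for this sketch.
    type Item struct{ Object interface{} }

    type Cache struct {
        mu    sync.Mutex
        items map[string]Item
    }

    // LockedView exposes operations that assume c.mu is already held.
    type LockedView struct{ c *Cache }

    // Items returns the live map without locking; only safe inside Locked.
    func (v LockedView) Items() map[string]Item { return v.c.items }

    // Flush swaps in a fresh map without locking; only safe inside Locked.
    func (v LockedView) Flush() { v.c.items = map[string]Item{} }

    // Locked runs f with the mutex held, handing it the lock-free view.
    func (c *Cache) Locked(f func(LockedView)) {
        c.mu.Lock()
        defer c.mu.Unlock()
        f(LockedView{c})
    }

The snapshot-and-flush from this issue then becomes:

    c := &Cache{items: map[string]Item{}}
    var snapshot map[string]Item
    c.Locked(func(v LockedView) {
        snapshot = v.Items() // grab the old map...
        v.Flush()            // ...and swap in an empty one, atomically
    })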

A minor change might be to return the map when Flush is called.

Yes, I agree that would be good. Unfortunately it's also backwards-incompatible.
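
A compatible route (a sketch only, reusing the Cache/Item stand-ins above; the method name is just the one from the issue title, not settled API) would be to leave Flush() alone and add a separate method that returns the snapshot:

    func (c *Cache) GetItemsAndFlush() map[string]Item {
        c.mu.Lock()
        defer c.mu.Unlock()
        snapshot := c.items // the old map is handed to the caller as-is
        c.items = map[string]Item{}
        return snapshot
    }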

I doubt Patrick reads these issues nowadays

Probably not, no. But I've been maintaining a fork at https://github.com/arp242/zcache and keep an eye on things here. So what I'm really talking about is the best way to solve your use case in my fork 😅
