
Bug 1931622 - When using API to attach a file when too small for cloud storage, it will still try to get the new attachment from cloud storage generating an error #2362

Merged (1 commit, Nov 25, 2024)

Conversation

dklawren

This change sets $attachment->{data} to the attachment data that was just submitted. Caching it on the object prevents a later database or net-storage fetch when the same object is reused; if that object is gone and a new one is instantiated, the data is loaded from the correct location as before. This also fixes the error in the bug report, because the same object is used later to return results to the client.
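The pattern behind the fix can be sketched as follows. This is an illustrative Python sketch, not the actual Bugzilla code (which is Perl); the `Attachment` class and `store` backend names are hypothetical. The point is that `create` caches the submitted bytes on the object, so a later read of `.data` on that same object never round-trips to the database or cloud storage.

```python
class Attachment:
    def __init__(self, store, attach_id):
        self._store = store      # hypothetical storage backend (DB or cloud)
        self.attach_id = attach_id
        self._data = None        # lazily loaded on first access

    @property
    def data(self):
        # Only fetch from storage if nothing is cached on this object.
        if self._data is None:
            self._data = self._store.fetch(self.attach_id)
        return self._data

    @classmethod
    def create(cls, store, attach_id, data):
        store.save(attach_id, data)
        obj = cls(store, attach_id)
        obj._data = data         # the fix: keep the just-submitted data cached
        return obj
```

A freshly created object serves `.data` from the cache; a newly instantiated object for the same attachment falls back to fetching from the correct storage location.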

@dklawren dklawren requested a review from cgsheeh November 23, 2024 23:25

@cgsheeh left a comment


WFM.

Question: for the sake of consistency, could we just store all objects in cloud storage? What is the motivation for keeping blobs under 20 KB in the database? For an object that small, I would be surprised if there is much of a performance win from a direct DB query compared to loading from cloud storage.

@dklawren

> WFM.
>
> Question: for the sake of consistency, could we just store all objects in cloud storage? What is the motivation for keeping blobs under 20 KB in the database? For an object that small, I would be surprised if there is much of a performance win from a direct DB query compared to loading from cloud storage.

It's been five years, so it is difficult to remember, but from what I recall we wanted to reduce the number of external calls for small attachments, which helps performance for the majority of attachments, while keeping the size of the DB as small as possible. The original bug is Bug 1588221 - Update current s3 storage controller to only upload attachments over a certain size. It would simplify things if we just sent everything to one place or the other. Let me create a bug to investigate the upsides and downsides.
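The split being discussed amounts to size-threshold routing: small blobs stay in the database, large ones go to cloud storage. A minimal sketch in Python (the real code is Perl, and the exact cutoff and function names here are hypothetical, chosen to match the 20 KB figure mentioned above):

```python
# Hypothetical 20 KB cutoff, per the discussion above.
MIN_CLOUD_SIZE = 20 * 1024

def choose_backend(data: bytes) -> str:
    """Route a blob to a storage backend by size.

    Blobs at or below the cutoff are stored in the database,
    avoiding an external call on read; larger blobs go to cloud storage.
    """
    return "database" if len(data) <= MIN_CLOUD_SIZE else "cloud"
```

The trade-off cgsheeh raises is real: the threshold saves an external call for the common small case, but means reads must know (or probe) which backend holds a given attachment, which is exactly what went wrong in this bug.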

@dklawren dklawren merged commit 91cf6e8 into mozilla-bteam:master Nov 25, 2024
17 checks passed
@dklawren dklawren deleted the 1931622 branch November 25, 2024 23:03