FAQ
URL
https://www.amazon.in/
What happened?
Description:
When running Lighthouse reports with the Python Lighthouse library, the scores differ from those produced for the same categories by the Chrome DevTools Lighthouse panel. This variance makes it difficult to keep the two sets of scores consistent.
Issue:
Running Lighthouse via the Python library from the PyCharm terminal yields significantly different scores than DevTools when auditing individual categories (e.g., Performance or Best Practices) on desktop. These variations undermine confidence in the reliability of the scores obtained through Python Lighthouse and raise concerns about the accuracy of the reported metrics.
What did you expect?
How can I ensure that the scores obtained from DevTools and Python Lighthouse match exactly? Currently, most runs produce scores that differ noticeably from the DevTools results.
Below I have attached screenshots of the scores for Performance, Accessibility, and Best Practices.
DevTools report for Performance
PyCharm terminal score for Performance
DevTools report for Accessibility
PyCharm terminal score for Accessibility
DevTools report for Best Practices
PyCharm terminal score for Best Practices
What have you tried?
I have tried matching the browser version and the Lighthouse version, and updating to a newer Node version.
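Since the report doesn't name the exact Python wrapper in use, one way to rule the wrapper out is to shell out to the Lighthouse CLI directly from Python, pinning the same configuration the DevTools panel uses for a desktop audit (desktop preset, one category per run). This is a minimal sketch, assuming the npm-installed `lighthouse` CLI is on PATH; the `run_category` helper and the URL constant are illustrative, not part of any library:

```python
import json
import shutil
import subprocess

URL = "https://www.amazon.in/"  # the page under test, from the report above

# On Windows the npm-installed CLI is a .cmd shim; shutil.which resolves it.
LIGHTHOUSE = shutil.which("lighthouse") or "lighthouse"


def run_category(url: str, category: str) -> float:
    """Run a single-category desktop audit and return its 0-100 score."""
    result = subprocess.run(
        [
            LIGHTHOUSE,
            url,
            "--preset=desktop",               # desktop emulation, as in the DevTools panel
            f"--only-categories={category}",  # audit one category per run
            "--output=json",
            "--output-path=stdout",           # write the JSON report to stdout
            "--chrome-flags=--headless=new",
            "--quiet",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    report = json.loads(result.stdout)
    # Lighthouse stores category scores on a 0-1 scale in the JSON report.
    return report["categories"][category]["score"] * 100


if __name__ == "__main__":
    for cat in ("performance", "accessibility", "best-practices"):
        print(f"{cat}: {run_category(URL, cat):.0f}")
```

Even with identical versions and settings, Lighthouse's own documentation describes expected run-to-run variance (network conditions, machine load, A/B tests on the page), so averaging several runs is a more realistic target than an exact match with a single DevTools run.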
How were you running Lighthouse?
CLI, Chrome DevTools
Lighthouse Version
11.6.0
Chrome Version
124.0
Node Version
v20.12.2
OS
Windows 10 Enterprise
Relevant log output
No response