Please do not use those online tools which claim to measure how good your content blocker is; they are often flawed.
Here I show that despite unconditionally blocking all network requests (through dynamic filtering rules), the tool still reports uBO at "85%".
The flaw in this case is that the tool fails to account for the fact that uBO redirects many blocked network requests to a local neutered resource, both to lower the likelihood of website breakage and to defuse anti-content-blocker detection.
Once you disable all redirections with the filter "@@*$redirect-rule", you then get "100%", but at the cost of a higher likelihood of website breakage and of content blocker detection.
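To make the flaw concrete, here is a minimal simulation (my own illustration, not the test tool's actual code) of a naive scoring methodology that counts a request as "blocked" only when it errors out. A request redirected to a local neutered resource looks like a successful fetch, so the naive score undercounts protection even though no data ever reached the remote server:

```python
# Illustrative assumption: the tool fires a fixed set of test requests and
# classifies each outcome. Outcome labels here are hypothetical.

def naive_score(outcomes):
    """Flawed metric: only hard network errors count as 'blocked'."""
    blocked = sum(1 for o in outcomes if o == "error")
    return round(100 * blocked / len(outcomes))

def actual_protection(outcomes):
    """What actually matters: did the request reach the remote server?"""
    neutralized = sum(1 for o in outcomes if o in ("error", "redirected-to-local"))
    return round(100 * neutralized / len(outcomes))

# 20 test requests, all neutralized, but 15 of them via local redirects.
outcomes = ["error"] * 5 + ["redirected-to-local"] * 15

print(naive_score(outcomes))        # misleadingly low: 25
print(actual_protection(outcomes))  # reality: 100, nothing left the machine
```

The numbers are made up for illustration; the point is structural: any tool scoring by fetch failure alone will penalize a blocker precisely for the redirections that reduce breakage and detection.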
Such tools can also make network requests to destinations which never occur in practice, in the real world.
For instance, the tool above presents "amazonaax .com" as being related to Amazon. It's not, and there are no reports of it ever being used as an ad or tracking server.
In short, that sort of slick tool will mislead you into making the wrong decisions regarding your content blocker of choice.
Recommending these as a reference tool to people trying to make a decision regarding which content blocker to install is a disservice to them.
Filter list maintainers are already quite burdened with dealing with real, actual filter issues; they shouldn't have to spend time on issues made up by such dubious online tools.
The right way to report remote servers which one believes should be blocked is to report them to filter list maintainers: all blockers offer an easy way to do this, and entries added to widely used lists will protect the largest number of people.
Typical erroneous conclusion as a result of using this tool, seen on Reddit:
When I run that tool on my own personal browser profile, in which uBO is set to hard blocking mode (https://github.com/gorhill/uBlock/wiki/Blocking-mode:-hard-mode…), I get terrible results: reportedly "142 not blocked", even though only 10 network requests were actually fired, 5 of which were blocked.
Users report a site with filtering issues, volunteers spend time to investigate a fix: Based.
Users report THAT site with filtering issues, volunteers spend time to investigate a fix: Cheat.
There is "fraud", it's just not where you think it is.
https://x.com/MiaHerbert00/status/1878115116647858388…
For the record, the author of that "tool" engaged with us nearly 2 years ago: https://www.reddit.com/r/uBlockOrigin/comments/12gfix8/… The issue is more than just a lack of access to APIs; the methodology is flawed, as explained in the Brave blog post.
Consider adblock-tester\.com: It ranks "Total Adblock" first in its "in-depth comparison of top ad blockers" (scores "100"), and recommends it. Sample of reviews of "Total Adblock" in the Chrome Web Store:
A legitimate approach to evaluating content blockers is to see what happens when measuring with actual, real-world webpages, and academia has resources to do this, see various papers: https://github.com/gorhill/uBlock/wiki/Scientific-papers…
My bad, tweet removed. @GrapheneOS doesn't "peddle" those "adblocker test" sites. I quoted a tweet which didn't make it clear they also see such "tools" as "extremely flawed", as seen in this Sep 24 tweet. The disagreement is around using the word "cheat".
https://x.com/GrapheneOS/status/1831877968895078464…
Ad blocking test at https://d3ward.github.io/toolz/adblock is extremely flawed. It tests domains which are not used for ads/tracking and doesn't take into account that mainstream ad blocking is blocking specific paths hosted at those domains. Mainstream ad blockers also cheat at these tests.
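The point about path-level blocking can be illustrated with two example filters in common EasyList/uBO static filter syntax (domains here are hypothetical placeholders):

```
! Domain-level network filter: blocks everything served from this hostname
||ads.example.com^

! Path-specific filter: blocks only one tracking script on a domain that
! otherwise serves content the page legitimately needs
||cdn.example.com/lib/tracker.js$script
```

A test that merely probes whether a bare domain resolves or loads will never exercise filters of the second kind, which is a large part of what real-world filter lists contain.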