User Details
- User Since: Nov 4 2014, 4:29 PM (572 w, 1 d)
- Availability: Available
- IRC Nick: bblack
- LDAP User: BBlack
- MediaWiki User: BBlack (WMF)
Aug 12 2025
@Krinkle - Diving into the specifics a bit (all of this will probably be clearer if you make a prototype VCL patch for phase 1), and re-stating/asking a few things for clarity:
Jun 2 2025
That doesn't sound right to me.
May 27 2025
FYI from the edge TLS stats POV (the first graph in the description): if we ignore the null results and just look at TLSv1.2 / (TLSv1.2 + TLSv1.3), in the ~year since this ticket was created the TLSv1.2 share has dropped from ~3.76% to ~2.78%. We still have quite a ways to go before we reach a comfortable number, even in that simplistic analysis.
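(A minimal sketch of that simplistic ratio, with placeholder counts rather than the real numbers from the graphs:)

```
# Hypothetical request counts standing in for the edge TLS stats,
# with the null results already excluded.
tls12_reqs = 2_780
tls13_reqs = 97_220

tls12_share = tls12_reqs / (tls12_reqs + tls13_reqs)
print(f"TLSv1.2 share: {tls12_share:.2%}")  # -> TLSv1.2 share: 2.78%
```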
Apr 14 2025
Thank you!
Apr 11 2025
@bd808 - Can you review the proposed fixups in the MR above? Thank you!
Apr 7 2025
This doesn't seem very problematic from our perspective. If anything, it should reduce traffic from agents that formerly followed the whole 307 chain and may now stop earlier on a 200.
Dec 20 2024
Opened L3SC request for Legal review after the holidays: https://app.asana.com/0/1202858056100648/1209024900407685/f
Dec 4 2024
Also, the way to standardize this for sanity (avoiding ORIGIN mistakes on both ends) is probably to follow some simple rules that:
Seems like a net win to me. Reduces some error-prone process stuff and makes life simpler!
Oct 25 2024
Ah, interesting! We should confirm that, and then perhaps avoid the Set-Cookie entirely for cookies that are (or at least are intended to be) ~SameSite=Lax|Strict, I guess?
I don't think it's necessarily always possible for us to know it's cross-origin, though, right? Whether they tell us about a referrer at all would depend on the $random_other_site's CORS setup?
Oct 24 2024
Do we have a specific example of a URL and which cookies triggered the rejections? In my own quick repro attempt, I only saw them failing on actually cross-domain traffic (in my case, an enwiki page was loading Math SVG content from https://wikimedia.org/api/..., and it was the cookies coming with that response that were rejected).
Seems like all of the Varnish-level cookies mentioned at the top should at least gain explicit SameSite= attributes, plus Partitioned where appropriate (currently only NetworkProbeLimit carries a SameSite attribute at all). A rough sketch of that policy is below.
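(A hedged sketch in Python rather than our actual VCL - the default attribute choice and the example cookie are assumptions, not decisions:)

```
def harden_set_cookie(value: str, partitioned: bool = False) -> str:
    """Sketch only: ensure a Set-Cookie value carries an explicit
    SameSite attribute, plus Partitioned where deemed appropriate."""
    attrs = [a.strip().lower() for a in value.split(";")]
    if not any(a.startswith("samesite=") for a in attrs):
        value += "; SameSite=Lax"  # assumed default; Strict/None per-cookie
    if partitioned and "partitioned" not in attrs:
        value += "; Partitioned"
    return value

# e.g. (illustrative cookie, not necessarily one of ours):
# harden_set_cookie("Probe=1; Path=/; Secure; HttpOnly")
#   -> "Probe=1; Path=/; Secure; HttpOnly; SameSite=Lax"
```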
Jul 23 2024
Note also that Digicert's annual renewal is coming soon in T368560. We should maybe look at whether the OCSP URI is optional in the form for making the cert, and turn it off (assuming their CRLs are also working fine). Or, if they're not ready for this, I guess Digicert waits another year.
Firefox has historically been the reason we've been stapling OCSP all these years. If our certificate has an OCSP URI in its metadata, Firefox will check OCSP in realtime (a privacy risk) unless our servers staple the OCSP response into the TLS negotiation (which we do!). This applies to both our Digicert and LE unified certs (and I'm sure some other lesser cases as well!).
Jul 2 2024
^ While we can maintain the VCL-level hacks for now, it would be best to both dig into how this actually happened (most likely we ourselves emitted donate.m links from a wiki, probably donatewiki itself?) and come up with a permanent solution at the application layer (fix the wiki to support these links properly and directly). We don't want to keep accumulating hacks like these in our already-overly-complex VCL code if unwarranted.
Jun 27 2024
Note there was some phab/brain lag here: I wrote this before I saw joe's last response above, so they overlap a bunch.
Jun 3 2024
Re: "same logic" - they're different protocols, different hierarchies, and much different on the client behavior front as well. It doesn't make sense to share a strategy between the two.
May 31 2024
Yes, from a resiliency POV, in some senses keeping unicasts in the mix is an answer (and it's the answer we currently rely on). In a world with only very smart and capable resolvers, the simplest answer probably is the current setup. And indeed, not-advertising ns2 from the core DCs would be a very slight resiliency win over that.
Yeah, my general thinking was to get ns1-anycast going first, and then figure out any of the above about better resiliency before we consider withdrawing ns0-unicast completely.
Yeah, I've looked at this from the deep-ntp-details POV and it's all pretty sane. We're in alignment with the recommendations in https://www.rfc-editor.org/rfc/rfc8633.html#page-17 and it should result in good time sync stability.
May 30 2024
On that future discussion topic (sorry I'm getting nerdsniped!) - Yeah, I had thought about prepending (vs the hard A/B cutoff) as well, but I tend to think it doesn't offer as much resiliency as the clean split.
Re: anycast-ns1 and future plans, etc (I won't quote all the relevant bits from both msgs above):
May 25 2024
There are brand/identity dilution and confusion issues with using any of *.wiki in an official capacity, especially as canonical redirectors for Wikipedia itself, which is why we didn't start using these many years ago when they were first offered for free.
May 22 2024
I'm a little leery of dropping the TTL really short. I get the argument for the normal case, but we also have to consider that something out there on the Internet could cause traffic surges to some of these URLs, and we'd lose some of our caching defenses against that with a short TTL (especially if we're also no longer pregenerating them, making such traffic more expensive on the inside). Re-routing sounds better? Or perhaps even better would be a full-on redirect to the new parsoid URL paths?
May 17 2024
What a fun deep-dive! :)
May 14 2024
Also similarly T214998
T215071 <- throwing this in here for semi-related context. Maybe we can align on a potential common future URI scheme anyways, while not actually yet tackling that one.
May 10 2024
Should be all set; it may take up to ~30 minutes for the changes to propagate.
May 8 2024
The patch should fix things up; let me know if there are still problems after ~half an hour, once the change has propagated through the systems.
May 3 2024
We could choose to use subdivision-level mapping in cases where it makes sense.
Jan 19 2024
We discussed this in Traffic earlier this week, and I ended up implementing what I think is a reasonable solution already, so now I've made this ticket for the paper trail and to cover the followup work to debianize and usefully deploy it. The core code is published at https://github.com/blblack/tofurkey .
Dec 5 2023
The perf issues are definitely relevant for Traffic's use of haproxy (in a couple of different roles). Your option (making a libssl1.1-dev for bookworm that tracks the security fixes still being done for bullseye, and packaging our haproxy to build against it) would be the easiest path from our POV for these cases.
Nov 29 2023
Followup: did a 3-minute test of the same pair of parameter changes on cp3066 for a higher-traffic case. No write failures were detected via strace this time (we don't have the error-log outputs to go by in 9.1 builds). mtail CPU usage at a 10ms polling interval was significantly higher than it was in ulsfo, but still seems within reason overall and isn't saturating anything.
I went on a different tangent with this problem and tried to figure out why ATS is failing writes to the notpurge log pipe in the first place. After some hours of digging into it (I'll spare you the endless details of temporary test configs and strace outputs of various daemons' behavior, etc.), these are the basic issues I see:
Nov 7 2023
I suspect it doesn't serve any real purpose at present, unless it was to avoid some filtering that exists elsewhere to prevent cross-site sharing of /32 routes or something.
Oct 16 2023
One potential issue with relying solely on MSS reduction is that, obviously, it only affects TCP. For now this is fine, as long as we're only using LVS (or future liberica) for TCP traffic (I think that's currently the case for LVS anyways!), but we could add UDP-based things in the future (e.g. DNS and QUIC/HTTP3), at which point we'll have to solve these problems differently.
Could we take the opposite approach with the MTU fixup for the tunneling, and arrange the host/interface settings on both sides (the LBs and the target hosts) so that they use a >1500 MTU only on the specific unicast routes for the tunnels, while defaulting to the current 1500 for all other traffic? If per-route MTU can usefully be set higher than the base interface MTU, this seems trivial; even if not, surely some set of ip commands could set the iface MTU to the higher value while clamping it back down to 1500 for all cases except the tunnel (a rough sketch of that is below).
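(A minimal sketch of the "even if not" case - the interface name, gateway, and tunnel-endpoint address are made up, and this is untested:)

```
import subprocess

def ip(cmd: str) -> None:
    """Run an ip(8) command; sketch-level error handling only."""
    subprocess.run(["ip"] + cmd.split(), check=True)

# Raise the base interface MTU so a tunnel route can exceed 1500...
ip("link set dev eno1 mtu 1600")
# ...clamp the default route back down to 1500 for everything else...
ip("route change default via 10.0.0.1 dev eno1 mtu 1500")
# ...leaving only the unicast route to the tunnel endpoint at the big MTU.
ip("route add 10.64.0.5/32 via 10.0.0.1 dev eno1 mtu 1600")
```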
Oct 3 2023
Looks about right to me!
We could perhaps add a normalization function at the ferm or puppet-dns-lookup layer (lowercase the hex and compress the zeros in a consistent way)? A minimal sketch of the normalization itself is below.
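(A minimal sketch in Python of just the normalization, regardless of which layer it ends up in - the function name is made up:)

```
import ipaddress

def normalize_ip(addr: str) -> str:
    """Canonicalize an IP literal: lowercase hex and consistent
    zero-compression for IPv6; IPv4 passes through unchanged."""
    return str(ipaddress.ip_address(addr))

# Both spellings collapse to the same canonical form:
assert normalize_ip("2620:0:861:0001:0:0:0:2") == "2620:0:861:1::2"
assert normalize_ip("2620:0:861:1::2") == "2620:0:861:1::2"
```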
Sep 25 2023
To clarify and expand on my position about this thread-count parameter (really just a side issue on this ticket, which is fundamentally complete):
Sep 22 2023
Adding to the confusion: we already used the hostname cp1099 back in 2015 for a one-off host (T96873), so that name confusingly exists in both phab and git history.
Reading a little deeper on this, I think we still have a hostname issue if those other 8 hosts are indeed being brought over from ulsfo+eqsin. Those 8 hosts would presumably be 1091-8, so these hosts should start at 1099, not 1098?
@VRiley-WMF - Sukhbir's out right now, but I've updated the racking plan on his behalf!