4 years ago by goatsi
The last time this came up on HN it got quite a negative review from someone who had tried it on several sites: https://news.ycombinator.com/item?id=19152145
It apparently attracted automated scanners and the signal to noise ratio was atrocious.
4 years ago by aasasd
People here dislike HackerOne, but afaiu it solves this exact problem. It's the first line of "support" for security reporters.
The fact that the industry currently needs this kind of solution is absurdly comedic. Basically, it would make actual sense to require people to pay ten bucks when posting a report - if they think the report is reasonable and that they would get paid for it.
4 years ago by tptacek
It's a good platform, but it really doesn't solve the time-sink problem, even if you pay for triage services. Triage can knock down well-known patterns of bogus stuff, but so can you; the real problem is the truly wacky stuff people come up with.
4 years ago by fractionalhare
I can see and read the JavaScript source code if I intercept the requests! Please pay me $100.
4 years ago by selfhoster11
I'd never enter into a paid interaction without some sort of escrow that will make sure I'm not screwed over for the crime of being a good citizen. The victim could just keep the cost of the report without ever refunding. Innocent people will keep getting caught by this even long after the company acquires a reputation for never providing refunds.
4 years ago by aasasd
Since the escrow sees your report, it sounds like HackerOne.
HO will just create paid tiers where, for a smallish subscription price, people actually take your reports seriously.
4 years ago by 67868018
Smart contracts on Ethereum blockchain... force them to set a date to pay you and go public. They can't back out.
Going to need a lot of work in the next few years to make something like this viable, but ETH could make it possible
4 years ago by Anunayj
Posting your email in a not easily parsable way can save you a lot of spam (rot13 it, break up the characters, etc.). At least that should cut most of the spam. This might not be standard, but I do not really see why we would need security.txt to be parsable by robots.
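For illustration, a rough sketch of the rot13 idea in Python (the address is a placeholder, not a real mailbox):

    # Obfuscate a contact address before publishing it; rot13 keeps it
    # human-recoverable while defeating naive scrapers.
    import codecs

    address = "security@example.com"           # placeholder
    obfuscated = codecs.encode(address, "rot13")
    print(obfuscated)                          # -> frphevgl@rknzcyr.pbz

    # A human reader reverses it the same way:
    print(codecs.encode(obfuscated, "rot13"))  # -> security@example.com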
4 years ago by orionblastar
I just use GNUPG and encrypt it.
4 years ago by Anunayj
encrypt with my private key?
4 years ago by mike_d
security.txt is a flag that you may have a bug bounty program, and as a result are a potential source of revenue.
It is time arbitrage: big companies take security seriously and are willing to pay large bounties, and those bounties can exceed a monthly or yearly wage in some internet-connected regions. If they throw enough nets into the sea all year, eventually one pays off and they end up living quite well.
4 years ago by superkuh
What does it mean when I enabled this on my personal (for fun) websites years ago on a whim? I don't have a bug bounty program.
4 years ago by __throwawy9
Putting all of these files at root is going to be like having old rusty cars and stained mattresses in front of your house years from now.
The web is not a junkyard.
4 years ago by dmurray
The proposal is to place the file at /.well-known/security.txt.
And even if it wasn't, there is plenty of namespace room to put every file someone argues for in a 2-page RFC at root. After all, there are only 1024 low-numbered TCP ports and we haven't run out of those yet.
4 years ago by GoblinSlayer
Trap them with a honeypot: disable HSTS and automatically delete all messages containing "HSTS" string.
4 years ago by 177tcca
Does this work?
4 years ago by coolreader18
A couple of websites/companies that have implemented this (just from checking ones I could think of):
https://www.google.com/.well-known/security.txt
https://www.cloudflare.com/.well-known/security.txt
4 years ago by djcapelis
The beautiful part of these is that they show exactly what happens with these kinds of files: only one of them implements the spec as linked.
(Expires isn't optional in the proposal on the website.)
4 years ago by politelemon
Their own security.txt also fails to do this
https://securitytxt.org/.well-known/security.txt
> # If you would like to report a security issue
> # you may report it to us on HackerOne.
> Contact: https://hackerone.com/ed
> Encryption: https://keybase.pub/edoverflow/pgp_key.asc
> Acknowledgements: https://hackerone.com/ed/thanks
4 years ago by nwcs
This was fixed: https://securitytxt.org/.well-known/security.txt
4 years ago by fakename11
I bet these "expired" security.txts will become more common than unexpired ones in the near future. Updating a date every year sounds annoying.
4 years ago by enw
In fact I think requiring an expiry date is a huge negative of the spec and will likely hinder adoption.
An expiry date brings along with it yet another maintenance burden for questionable benefit.
4 years ago by ylyn
Well, the spec only recommends that the date be no further than a year into the future.
So if you really don't want the burden, just set a date in the year 9999 or something.
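If I read the draft right, the Expires value is an Internet date/time (RFC 3339) string, so that far-future trick would come out roughly like this (sketch in Python; the year-9999 date is just the suggestion above, not a recommendation):

    # Format a far-future Expires line in the RFC 3339 shape the draft uses.
    from datetime import datetime, timezone

    far_future = datetime(9999, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
    print("Expires: " + far_future.strftime("%Y-%m-%dT%H:%M:%SZ"))
    # -> Expires: 9999-12-31T23:59:59Z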
4 years ago by infogulch
Honestly I think a "last reviewed date" or log of dates would be better because it aligns with the actual action that the hostmaster takes, and thus provides the reader with the most relevant facts instead of an arbitrary promise of future validity.
4 years ago by thw0rted
This. They did a bad job of explaining why they chose an expiration date in the draft RFC[1].
> If information and resources referenced in a "security.txt" file are incorrect or not kept up to date, this can result in security reports not being received by the organization or sent to incorrect contacts, thus exposing possible security issues to third parties.
Yes, the information could change after you write the file. No, it is not possible to know, when you write the file, at what future point the information will become incorrect. The document should have a "last reviewed" date, then the consumer can decide for themselves if it has been updated recently enough to be trustworthy.
1: https://tools.ietf.org/html/draft-foudil-securitytxt-11#sect...
4 years ago by jensenbox
I was literally going to craft a file and plop it on my site until I hit the "required expiration". I understand why it is there but think it should be optional. I think a better idea would be to steal from DNS and use TTL and serial numbers (maybe just standard http last-modified is enough?) - the point is "this stuff might be stale, reprocess it".
The last thing I need is one more thing to have to remember and update.
By the looks of it, a few others feel it is non-critical and have just skipped it too.
4 years ago by JimDabell
> I think a better idea would be to steal from DNS and use TTL and serial numbers (maybe just standard http last-modified is enough?)
HTTP already has an Expires header: https://tools.ietf.org/html/rfc7234#section-5.3
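A rough sketch of what leaning on the existing header could look like for a consumer (example.com stands in for a real host):

    # Read the HTTP-level freshness headers when fetching security.txt,
    # instead of relying on an in-file expiry field.
    import urllib.request

    url = "https://example.com/.well-known/security.txt"  # placeholder host
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
        print(resp.headers.get("Expires"))        # e.g. Thu, 01 Dec 2022 16:00:00 GMT
        print(resp.headers.get("Cache-Control"))  # e.g. max-age=86400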
4 years ago by JamisonM
Seems like a better solution that would accomplish the actual goal would be "refresh-after" where you could specify how many days a client should wait until asking again.
Zero maintenance required but still gives a rate-limiting and time window function.
4 years ago by not_knuth
Your comment made me think about using Google Search in an alternative way to figure out who is hiring at a glance:
https://www.google.com/search?q=hiring+well+known+security+f...
Edit: If one is not up for the 2min it takes to parse some publicly available list.
4 years ago by mttpgn
Facebook and LinkedIn do as well (but Microsoft, Apple, Amazon, Twitter, Yahoo, Netflix, Stackoverflow & Salesforce do not).
4 years ago by nicbou
I'm not sure I buy into the idea, but it couldn't have been sold any better. That security.txt generator is such a great way to get people on board. The whole website is really good at explaining the project.
4 years ago by IgorPartola
It's cute but it generates a five line plain text file. I would argue that a better way to sell the idea would be to create an Apache and nginx module so you could specify this stuff from those config files. It would make adoption seem easier to more people.
4 years ago by mjthompson
Serving a 'text file' from a web server module seems to overcomplicate things in my view.
More code || complexity == greater likelihood of bugs (including security bugs).
As ironic as a security bug in a security.txt serving module would be, it's probably best we avoid that possibility and let the ordinary, highly scrutinised file serving code handle it instead.
4 years ago by IgorPartola
If you want adoption, make it so easy that it's harder to not include it. If you run an application, it will take lines of config to serve this file anyways. Might as well make it easy. And if you can't make a bug free module that writes out 5 lines into a static file... probably shouldn't be defining web standards.
4 years ago by coolreader18
I think the text file generation is great - it is a standardized "syntax", so being able to just fill out your info on a webpage and get the .txt to upload to a server (instead of having to "learn" the couple of keys to use for your values) really does make it painless.
4 years ago by danaris
A lot of people have web hosting packages that don't give them direct access to the webserver configuration, but do let them upload arbitrary text files to their webroot.
4 years ago by IgorPartola
Nothing prevents you from writing this file by hand. But for people who use shared hosting having a security.txt file is likely not as important as companies that run their own infrastructure. And adding yet another file to be deployed into the web root or to be served by your application is likely a touch more work than enabling the module and adding a line or three to the web server config. None of it is a lot of work, but in a sense using the web serverâs config file is a lot less friction.
4 years ago by dec0dedab0de
That would make things much easier, especially having the expiration date automatically update, but also setting a default for all sites on a server.
It would also be nice to have libraries for the popular frameworks
4 years ago by IgorPartola
Yes! Make this a Django app and I will use it by default. Make me add it as a text file and I'll forget to add it.
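For what it's worth, a minimal sketch of what such a Django view might look like (not an existing package; the contact address is a placeholder):

    # Serve /.well-known/security.txt with an Expires date that rolls forward
    # automatically, so there is nothing to remember to update.
    from datetime import datetime, timedelta, timezone

    from django.http import HttpResponse
    from django.urls import path

    def security_txt(request):
        expires = datetime.now(timezone.utc) + timedelta(days=365)
        body = "\n".join([
            "Contact: mailto:security@example.com",                # placeholder
            "Expires: " + expires.strftime("%Y-%m-%dT%H:%M:%SZ"),  # ~a year out
            "",
        ])
        return HttpResponse(body, content_type="text/plain")

    urlpatterns = [
        path(".well-known/security.txt", security_txt),
    ]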
4 years ago by aasasd
One aspect that is not reflected in this format is that the site/company might have a specific routine for reporting vulns. When I happened to write to Node (iirc) about some potential problem, the mail was just redirected to HackerOne, converted to some kind of a draft, and I got an automatic response saying I need to create an account there. In true marginal-nerd fashion, I have some opinions on which of my email addresses go where, so the account remains uncreated and the problem unreported by me. And Node didn't specify anywhere that this reporting works through HackerOne.
(I also realize that this comment is probably not the right place to complain about the format, but eh.)
4 years ago by Y_Y
I'd have responded the same way.
I'm not creating an account just to do someone else a favour.
I will send an email and that's it. If you don't have an email address then you're not getting a message from me.
It's disappointing how frequently this comes up.
4 years ago by yarcob
Yeah, worst part is when some support engineer asks you to please post the bug to a bug tracker, but the bug tracker requires an account, and when you try to sign up they make you wait for someone to review your account, and at some point you wonder if these people ever get a single bug report from a customer.
4 years ago by aaronmdjones
This is what the Policy: key in the format is for.
4 years ago by aasasd
You're right, that would work.
Though the UX designer in me thinks that if the policy is important, it would be better to put it up on a webpage and slap that into the "contact" field, as a neighbor comment suggests. At least when the whole process turns out to sidestep email completely.
4 years ago by sedatk
You can put a URL on the contact field.
4 years ago by aasasd
Ah! Indeed, I missed that alternative.
4 years ago by achillean
This is already used by quite a few organizations/websites:
https://beta.shodan.io/search/report?query=http.securitytxt%...
We've been indexing it for a while now and we haven't seen the number of websites that support it change significantly. It would make notifying organizations easier if this was a more widely-adopted standard. This is what it looks like when you pull an IP that has a service with that file:
4 years ago by nwcs
(One of the authors here)
Make sure you read through the actual latest draft (especially section 6): https://tools.ietf.org/html/draft-foudil-securitytxt-11
Also, we are in the end stages of the IETF approval process so this should be official later this year (if all goes well): https://datatracker.ietf.org/doc/draft-foudil-securitytxt/
4 years ago by geekone
Why is there no Expires field on https://securitytxt.org/.well-known/security.txt?
4 years ago by Old_Thrashbarg
Strangely, the draft just shows up as an empty page for me in Firefox, but Chromium works fine.
4 years ago by pmh
It's likely some kind of caching issue. tools.ietf.org at 4.31.198.62 responds with the draft, but 64.170.98.42 404s
4 years ago by jwilk
This one works for me:
https://datatracker.ietf.org/doc/html/draft-foudil-securityt...
4 years ago by dexterdog
Yes, it's throwing a 404 for me on Firefox as well.
4 years ago by nocman
It did that for me at first, but it was due to my impatience.
I tried again and waited, and after 10 or 15 seconds, the page finally loaded.
4 years ago by mekster
That's not impatience, the page is broken, unless your network is super slow.
4 years ago by cmsefton
Not sure if there's something new here, but this has popped up before on HN.
https://news.ycombinator.com/item?id=19151213 (2019) https://news.ycombinator.com/item?id=15416198 (2017)
4 years ago by loloquwowndueo
A top-level security.txt sounds like a better idea than hiding it under .well-known. I wouldn't want anyone without access to the web server's root to be telling me what the security policy is, anyway.
Having it at top level makes it a sibling / analogous to robots.txt so there is some consistency to the pattern.
4 years ago by willeh
`.well-known` is already used for validation for many things, perhaps most crucially acme-challenge, which is used by LetsEncrypt to issue domain validation certificates. LetsEncrypt is trusted by all major browsers at this point, so it seems the consensus is that .well-known must be kept secure at any cost. So even if you disagree with `.well-known`, it must de facto be kept in the innermost ring of your security model.
4 years ago by thaumaturgy
Right, which is why putting this file under .well-known is a small inconvenience.
It's increasingly common for server configurations to have a reverse proxy routing requests to internal containers or servers. Things like SSL renewals are often handled by the reverse proxy (because reasons [1]), so those requests don't get routed to the internal hosts by default.
Site-specific stuff, like this file, probably belongs in the site's root directory.
This is a bit of bike-shedding though. It's only a small aggravation to work around.
[1]: Because you want to automate as much of the configuration as possible, so when a new hostname is added to a container or internal server, an SSL certificate just magically happens for it. This requires changes to the reverse proxy's configuration, and that's not something you want the internal containers doing, so it falls to the proxy to handle these itself. Letting the containers handle their own SSL setup means you have to have some kind of privileged communications channel from the container up to the reverse proxy, which is undesirable.
4 years ago by remram
The problem is when there starts to be a lot of "site-specific stuff". It's easy to remember not to route .well-known to user-generated content in your app. It's less easy to remember a list of endpoints, like robots.txt, security.txt, and so on. And what happens when that list grows? What if you already have a user called "security.txt" (or whatever the next one is)? This is why a .well-known prefix is valuable.
4 years ago by loloquwowndueo
My point exactly - validation usually involves write permissions to put a challenge or something else as required by the protocol (ACME in your example). If I put the security.txt file there and certbot gets compromised, there goes my security policy. Putting security.txt up one level so only root (i.e. me) can update it allows me to keep .well-known writable by robots only.
4 years ago by fomine3
It's defined on RFC before Let's Encrypt. https://tools.ietf.org/html/rfc5785