Over the past ten years, I have been involved in disclosing multiple vulnerabilities to different organizations, and each story is unique, as there is no standard way of doing it. I am not a security researcher and did not find those vulnerabilities on my own, but I was there. A responsible researcher, subject to your definition of what is responsible, first discloses the vulnerability to the developer of the product via email or a bug bounty web page. The idea is to notify the vendor as soon as possible so they have time to study the vulnerability, understand its impact, create a fix, and publish an update, so customers have a solution before weaponization starts. Once the vendor disclosure is over, you want to notify the public about the existence of the vulnerability for situational awareness. Some researchers wait a specified period before going public, some never disclose to the public, and some do not wait at all. There is also variance in the level of detail in the public disclosure: some researchers only hint at the location of the vulnerability and offer mitigation tips, while others publish full proof-of-concept code demonstrating how to exploit it. I am writing this to share some thoughts about the process, along with considerations and pitfalls that may come up.
A Bug Was Found
It all starts with the particular moment when you find a bug in a specific product, a bug that can be abused by a malicious actor to manipulate the product into doing something unintended and usually beneficial to the attacker. Whether you searched for the bug day and night under a coherent thesis or just encountered it accidentally, it is a special moment. Once the excitement settles, the first thing to do is to check on the internet and in some specialized databases whether the bug is already known in some form. If it is unknown, you enter a singular moment in time where you may be the only person on earth who knows about this vulnerability. I say may, as either the vendor already knows about it but has not released a fix yet for some reason, or an attacker knows about it and is already abusing it in ongoing stealth attacks. It could also be that another researcher somewhere in the world is sitting on this hot potato, contemplating what to do with it. The vulnerability you found could have existed for many years and could already be known to a select few; this is a possibility you cannot rule out. The clock has started ticking loudly. In a way, you discovered the secret sauce of a potential cyber weapon with an unknown impact, as vulnerabilities are just a means to an end for attackers.
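As a concrete example of that first check, a keyword search against a public CVE database such as NVD is a quick way to see whether something similar is already on record. The sketch below is a minimal illustration using the NVD REST API; the search keyword is a placeholder, and the API details may of course change over time.

    import requests

    # Minimal sketch: query the NVD CVE API (v2.0) for published entries matching
    # a keyword, as a first check on whether a finding is already publicly known.
    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def search_known_cves(keyword: str) -> list[str]:
        """Return the CVE IDs of published entries matching the given keyword."""
        resp = requests.get(NVD_URL, params={"keywordSearch": keyword}, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

    if __name__ == "__main__":
        # "exampled buffer overflow" is a placeholder search term, not a real product.
        for cve_id in search_known_cves("exampled buffer overflow"):
            print(cve_id)

An empty result from a search like this is not proof the bug is unknown, only that it is not publicly catalogued yet.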
Disclosing to the Vendor
You can and should communicate it to the vendor immediately; most software and hardware vendors publish a means of disclosure. Unfortunately, sending it quickly to the vendor does not reduce the uncertainty in the process, it adds to it. For instance, you may face silence on the other end and get no reply from the vendor at all, which can put you in a strange limbo state. Another outcome could be an angry reply asking how dare you look into the guts of their product searching for bugs and claiming you are driven only by publicity lust, a response potentially accompanied by a legal letter. You could also get a warning not to publish your work to the public at any point in time, as it can cause damage to the vendor. These responses do take place in reality and are not fictional, so you should keep them in mind. The best result of the first email to the vendor is a fast reply acknowledging the discovery, maybe promising a bounty, but most importantly cooperating sensibly with your goal of public safety disclosure.
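If the disclosure channel is not obvious, one increasingly common convention is a security.txt file (RFC 9116) published on the vendor's website, which lists a contact for exactly this purpose. Below is a minimal sketch of checking for it; vendor.example is a hypothetical domain, not a real vendor.

    import requests

    # Minimal sketch: look for a vendor's security.txt (RFC 9116), a common way
    # for organizations to publish their vulnerability disclosure contact.
    def fetch_security_txt(domain: str) -> str | None:
        for path in ("/.well-known/security.txt", "/security.txt"):
            try:
                resp = requests.get(f"https://{domain}{path}", timeout=10)
            except requests.RequestException:
                continue
            # A valid security.txt must contain at least one Contact field.
            if resp.status_code == 200 and "Contact:" in resp.text:
                return resp.text
        return None

    if __name__ == "__main__":
        # vendor.example is a placeholder domain.
        print(fetch_security_txt("vendor.example") or "No security.txt found")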
There are researchers who do not hold their breath waiting for the vendor and immediately go to public channels with their findings, assuming the vendor will hear about it eventually and react. Most of the time, this approach sacrifices users' safety in the short term in exchange for stronger pressure on the vendor to respond. A plan not for the faint of heart.
In the constructive scenarios of disclosure to the vendor, there is usually a process of communicating back and forth with the technical team behind the product: exchanging details on the vulnerability, sharing the proof of concept so the vendor can reproduce it quickly, and supporting the effort to create a fix. Keep in mind that even if a fix is created, it does not mean it fits the company's plans to roll it out immediately, for whatever reason, and this is where your decision on how to time the public disclosure comes into play. The vendor wants the timeline adjusted to their convenience, while your interest is to make sure a fix and public awareness of the problem are available to users as soon as possible. Sometimes these interests are aligned, sometimes they conflict. Google Project Zero made the 90-day deadline a famous and reasonable period from vendor notification to public disclosure, but it is not written in stone, as each vulnerability reveals different dynamics concerning fix rollout, and the timeline should be considered carefully.
Public Disclosure
Communicating the vulnerability to the public usually takes one of two possible paths. The easiest one is to publish a blog post and share it on some cybersecurity expert forums; if the story is interesting it will pick up very fast, as information dissemination in the world of infosec works quite well, and the traditional media will pick it up from this initial buzz. It is the easiest way but not necessarily the one where you have the most control over the consequences, as the interpretations and opinions along the way can vary greatly. The second path is to connect directly with a journalist from a responsible media outlet with shared interest areas and build a story together, where they can take the time to ask for comments from the vendor and other related parties and develop the story properly. In both cases, the vulnerability uncovered should have a broad audience impact to reach publicity. Handling the public disclosure comes with quite a bit of stress for the inexperienced: once the story starts rolling publicly you are not in control anymore, and the only thing left to you, and the best advice I can give, is to stay true to your story, know your details, and be responsive.
I suggest letting the vendor know about your public disclosure intentions from the get-go so there won't be surprises, and hopefully they will cooperate with it, even though there is a risk they will have enough time to downplay or mitigate the disclosure if they are not open to the publicity step.
One of the main questions that arises when contemplating public disclosure is whether to publish the proof-of-concept code or not. It has pros and cons; in my eyes, more cons than pros. In general, once you publish the mere existence of the vulnerability, you have covered the goal of awareness, and combined with the public pressure that may create on the vendor, you may have shortened the time for a fix to be built. The published code may create more pressure on the vendor, but the addition is marginal. Bear in mind that once you publish a POC, you shorten the time it takes attackers to weaponize the new arsenal during the most sensitive window, when the new fix does not yet protect most users. I am not suggesting that attackers are in pressing need of your POC to abuse the new vulnerability; the CVE entry that pinpoints the vulnerability is enough for them to build an attack. I am arguing that, by definition, you did not make their life harder by giving them example code. Making their life harder and buying more time for users of the vulnerable technology is all about safety, which is the original goal of the disclosure anyhow. The reason to favor publishing a POC is the contribution to the security research industry, where researchers gain another tool in their arsenal in the search for other vulnerabilities. Still, once you share something like that in public, you cannot control who gets this knowledge and who does not, and you should assume both attackers and defenders will. There are people in the industry who strongly oppose POC publishing due to the cons I mentioned, but I think they take too harsh a stance. It is a fact that the mere CVE publication causes a spike of new attacks abusing the new vulnerability, even in cases where a POC is not available in the CVE, so the POC does not seem to be the main contributor to that phenomenon. I am not in favor of publishing a POC, though I think about it carefully on a case-by-case basis.
One of the side benefits of publishing a vulnerability is recognition in the respective industry, and this motivation goes alongside the goal of increasing safety. The same applies to possible monetary compensation. These two "nonprofessional" motivations can sometimes cause misjudgment for the person disclosing the vulnerability, especially when navigating the harsh waters of publicity, and they often draw public criticism of the researchers. I believe independent security researchers are more than entitled to these compensations, as they put their time and energy into fixing broken technologies they do not own, with good intentions, so these extra drivers eventually increase safety for all of us.
On Patching
The main perceived milestone during a vulnerability disclosure journey is the release of a new version by the vendor that fixes the vulnerability. The real freedom to disclose everything about the vulnerability comes when users are protected by that new fix, and in reality there is a considerable gap between the time a new patch is introduced and the time systems actually have it applied. In enterprises, unless it is a critical patch with a massive impact, it may take 6-18 months until patches are applied to systems. On many categories of IoT devices, no patching takes place at all, and on consumer products such as laptops and phones the pace of patching can be fast, but the process is cumbersome and tedious, and because of that many people just turn it off. The architecture of software patches, which many times mix new features with security fixes, is outdated, flawed, and not optimized for the volatility of the cybersecurity world. So please bear in mind that even if a patch exists, it does not mean people and systems are safe and protected by it.
The world has changed a lot in the past seven years regarding how vulnerability disclosure works. More and more companies have come to appreciate the work of external security researchers, and there is more openness toward collaborative efforts to make products safer. There is still a long way to go to achieve agility and greater safety, but we are definitely headed in the right direction.