rslade
Influencer II

Making software more secure by adding bugs ...

At first, the concept of chaff bugs sounds insane.  After all, we bend considerable effort to keeping bugs out of our code and, when we find them, to getting rid of them.

 

However, the idea of adding non-exploitable flaws to a program, in order to drive black hats nuts by forcing them to spend time on unprofitable research, does have a certain appeal.  It's reminiscent of Cymmetria's work in the deception space, or of the debate over the ethics of active defence.

 

I have to admit, my appsec soul is not totally convinced of the soundness of the idea.  In general, complexity is the enemy of security, and this will definitely add complexity.  How do we determine that the bugs are non-exploitable?  If we use the same bugs, or some algorithm for generating them, won't they be identifiable to the black hats?  What will this do to bug bounties?  (About which I have my doubts as well ...)
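For what it's worth, the general idea can be sketched in a few lines of C.  This is my own toy illustration, not code from the paper: the intentional overflow can only reach dead padding that nothing ever reads, which is roughly the "non-exploitable by construction" argument.

```c
#include <string.h>

/* Toy illustration of a "chaff bug" (hypothetical, not from the paper):
 * the intentional overflow can only write into dead padding that the
 * program never reads, so triggering it does not change behaviour. */
struct request {
    char name[16];
    char padding[64];  /* dead space: written by the bug, never read */
    int  authorized;   /* lives past the padding; the bug cannot reach it */
};

void handle(struct request *r, const char *input) {
    r->authorized = 0;  /* decided before, and independently of, the copy */

    size_t n = strlen(input);
    /* BUG (intentional): the bound should be sizeof r->name.  A long
     * input spills into r->padding, but the cap keeps the write from
     * ever reaching r->authorized. */
    if (n > sizeof r->name + sizeof r->padding)
        n = sizeof r->name + sizeof r->padding;
    memcpy(r->name, input, n);
}
```

A fuzzer will happily flag the overflow, but exploiting it goes nowhere, which is the time sink the authors are after.  The hard part, per the questions above, is proving that property holds for every bug you inject.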


............

Other posts: https://community.isc2.org/t5/forums/recentpostspage/user-id/1324864413

This message may or may not be governed by the terms of
http://www.noticebored.com/html/cisspforumfaq.html#Friday or
https://blogs.securiteam.com/index.php/archives/1468
7 Replies
Flyslinger2
Community Champion

If you can't scrape up enough interest to do the presentation, I would appreciate reading your documentation.

 

I get the defense-in-depth philosophy, and adding bugs to software as a distraction is certainly an option.   I would have to think long and hard about the merits of purposefully adding "malicious code", though.

CraginS
Defender I

 @rslade is incorrect in suggesting these chaff bugs might make software safer; they will not. In fact, the paper's authors are also incorrect in suggesting that chaff bugs can deter attacks; they cannot. 

 

The authors' intent is to build a software environment that wastes malicious attackers' time. I contend that implementing a chaff bug plan in a development team would be an even bigger time waste for the developers than for potential attackers.

 

For a full discussion, see my blog post, Chaff Bugs: What's the Point?

 


D. Cragin Shelton, DSc
Dr.Cragin@iCloud.com
My Blog
My LinkedIn Profile
My Community Posts
Thalpius
Newcomer III

I wonder how you would implement bugs that are not exploitable. I think it's a bad idea; the goal should be to introduce no bugs at all.
Flyslinger2
Community Champion

@CraginS - I read your blogspot article.  You verbalized what was rattling around in my head: specifically, that the same energy and effort should be put into writing good code, and that the security engineer needs to be on the project from inception.

 

On my current project I'm trying to educate the customer about the same need. They outsource some of their code, and when I said that I had to apply the same assessment methodologies to that vendor's code as I do to theirs, they were not pleased.

denbesten
Community Champion

It is good to see developers envision potential attack avenues and build appropriate defenses.  This knowledge will only help them become better developers.  However, as is the case with anyone newly entering a field (e.g. threat analysis), it appears they have insufficient experience to foresee how their solution could go wrong.

 

What they propose sounds much like a honeypot.  The difference, of course, is that a traditional honeypot redirects the black hat to a disposable server located on an isolated network.  This prevents competition for resources in the production environment and keeps complexity out of the production environment. 

 

Historically, I have only thought of honeypots as operating at layers 3 and 4 because the typical use-case involves redirecting unused IP addresses/ports their way.  The article effectively points out that there is value to a honeypot at layer 5 and above.

 

An IPS is our traditional solution for this part of the stack.  At the first sign of exploit, the IPS will either "terminate", "blackhole" or "ignore/allow" the session. This article helped me realize that there might be value to an IPS also having the ability to "redirect to honeypot".
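That "redirect to honeypot" action could be as simple as a routing decision in front of the application.  A minimal sketch, with hypothetical hostnames and signatures (this is not any real IPS product's API):

```c
#include <string.h>

/* Sketch of an IPS "redirect to honeypot" action (hypothetical
 * hostnames and signatures, not a real product's API): on the first
 * sign of exploit, pick a decoy backend instead of terminating. */
static const char *SIGNATURES[] = { "../../", "%00", "<script>" };

const char *route(const char *request) {
    for (size_t i = 0; i < sizeof SIGNATURES / sizeof SIGNATURES[0]; i++)
        if (strstr(request, SIGNATURES[i]))
            return "honeypot.internal";  /* decoy keeps the attacker busy */
    return "app.internal";               /* clean traffic goes to production */
}
```

A real IPS would match far richer signatures and would have to pin the whole session to the decoy, not just one request, or the attacker would notice the switch.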

 

 

Baechle
Advocate I


@Thalpius wrote:
I wonder how to implement bugs which are not exploitable. I think it’s a bad idea and the outcome should be to implement no bugs at all.

I agree.  The coders already don't (have time to) spend enough effort on getting rid of actual exploits.

 

Also, by collocating a honey-sploit with an actual application, you're risking creating even more vulnerabilities. 

 

I can't see this as a good idea at all, and only mildly interesting in an academic research environment.

 

Sincerely,

 

Eric B.

Baechle
Advocate I


@denbesten wrote:

 

An IPS is our traditional solution for this part of the stack.  At the first sign of exploit, the IPS will either "terminate", "blackhole" or "ignore/allow" the session. This article helped me realize that there might be value to an IPS also having the ability to "redirect to honeypot".


Except that a true Honeypot is usually prohibitively expensive.  It has to be maintained much like the production system, even though typically it's only emulating other systems.  It still needs to be updated, patched, checked for being actually exploited in and of itself, rebooted, and fed decoy data (lest the hackers realize they're in a Honeypot).

 

That said, there is some benefit to participating in the Honeynet Project and similar endeavors in order to identify emerging exploits being used by attackers.

https://www2.honeynet.org/projects/

 

I don't know that this is beneficial to an organization, though, unless your org is a security research business.  It generally seems like a waste of resources that could be better spent on detective controls.

 

Sincerely,

 

Eric B.