SSL broken!

The problem is that the use of MD5 is still allowed. Summary:

"Our main result is that we are in possession of a “rogue” Certification Authority (CA) certificate. This certificate will be accepted as valid and trusted by many browsers, as it appears to be based on one of the “root CA certificates” present in the so called “trust list” of the browser."

Full story on the ZeroDay blog:
SSL broken! Hackers create rogue CA certificate using MD5 collisions

Detailed explanation:
MD5 considered harmful today - Creating a rogue CA certificate
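A toy sketch of why a hash collision transfers a CA signature (this is not real X.509/RSA; `ca_sign`, `verify` and the XOR "signature" are invented purely for illustration): the CA signs only the MD5 digest of the certificate body, so any second certificate with the same digest inherits a valid signature.

```python
import hashlib

def md5_digest(data: bytes) -> bytes:
    return hashlib.md5(data).digest()

def ca_sign(tbs_cert: bytes, ca_key: int) -> int:
    # Toy "signature": the CA signs only the MD5 digest of the
    # to-be-signed certificate, never the certificate bytes themselves.
    return int.from_bytes(md5_digest(tbs_cert), "big") ^ ca_key

def verify(tbs_cert: bytes, sig: int, ca_key: int) -> bool:
    # Verification recomputes the digest and checks it against the
    # signature -- it never sees the original signed bytes.
    return sig ^ ca_key == int.from_bytes(md5_digest(tbs_cert), "big")

CA_KEY = 0x1234
benign = b"CN=harmless-site.example, no CA bit"
sig = ca_sign(benign, CA_KEY)
assert verify(benign, sig, CA_KEY)
# If an attacker crafts rogue != benign with md5(rogue) == md5(benign)
# (an MD5 collision), verify(rogue, sig, CA_KEY) is True as well:
# the CA's signature transfers to the rogue certificate.
```

This is exactly the property the researchers exploited: the collision let them move a legitimate CA signature onto a certificate with the CA bit set.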

web = the primary avenue of attack

An article by Jeremiah Grossman arguing that it is now unanimous that the web is the main avenue of attack on the Internet:

It’s unanimous, Web application security has arrived

The article has a series of quotes from various sources supporting the claim, along with several lists of the main vulnerabilities found.

10 surprises in software security

An interesting article based on a set of interviews:

Software [In]security: Software Security Top 10 Surprises
By Gary McGraw, Brian Chess, Sammy Migues
source: InformIT

The 10 surprises:
9. Not only are there no magic software security metrics, bad metrics actually hurt.
8. Secure-by-default frameworks can be very helpful, especially if they are presented as middleware classes (but watch out for an over focus on security "stuff").
7. Web application firewalls are not in wide use, especially not as Web application firewalls.
6. Involving QA in software security is non-trivial... Even the "simple" black box Web testing tools are too hard to use.
5. Though software security often seems to fit an audit role rather naturally, many successful programs evangelize (and provide software security resources) rather than audit even in regulated industries.
4. Architecture analysis is just as hard as we thought, and maybe harder.
3. Security researchers, consultants and the press care way more about the who/what/how of attacks than practitioners do.
2. All nine programs we talked to have in-house training curricula, and training is considered the most important software security practice in the two most mature (by any measure) software security initiatives we interviewed.
1. Though all of the organizations we talked to do some kind of penetration testing, the role of penetration testing in all nine practices is diminishing over time.
0. Fuzz testing is widespread.

GPS vulnerabilities

an interesting article:

10 GPS Vulnerabilities
by Lieutenant Colonel Thomas K. Adams, US Army, Retired

For centuries explorers have navigated by fixed stars. Today our increasingly expeditionary military navigates by orbiting emitters. Satellites enable flexible communication and precise navigation that were unimaginable a generation ago. Space-based technologies reach down into everyday military business so much that interrupted service immediately and fundamentally degrades operations. Adams describes various threats to US satellites, systems that use their signals and a military that depends on falling stars.

SQLI + XSS + heap overflow = ?

a site vulnerable to SQL injection that allows (stored) cross-site scripting, plus a heap overflow vulnerability in the browser (Internet Explorer), gives...

SQL Injection Tangos with Heap Overflows
on the Veracode blog
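A minimal sketch of the first link in that chain, using sqlite3 and a hypothetical `comments` table: string-built SQL lets an attacker store a `<script>` payload as its own row (the stored XSS half), while a parameterized query stores the same payload as inert data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (body TEXT)")

def save_comment_vulnerable(body: str) -> None:
    # String concatenation: the quote in the payload breaks out of the
    # SQL literal and inserts an extra, attacker-controlled row.
    db.execute("INSERT INTO comments VALUES ('%s')" % body)

def save_comment_safe(body: str) -> None:
    # Parameterized query: the whole payload is stored as one value.
    db.execute("INSERT INTO comments VALUES (?)", (body,))

payload = "x'), ('<script>alert(1)</script>"
save_comment_vulnerable(payload)   # stores 2 rows, one holding the script
save_comment_safe(payload)         # stores the payload verbatim, 1 row
```

When a page later renders the stored `<script>` row without encoding, the XSS fires in the victim's browser, where the heap overflow exploit can then be delivered.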

to sudo or not to sudo


source: http://xkcd.com

designing vs fixing and security research

An interesting opinion piece on the topic:

Rethinking computing insanity, practice and research
from the CERIAS blog

excerpt:
We have crippled our research community as a result. There are too few resources devoted to far-ranging ideas that may not have immediate results. Even if the program managers encourage vision, review panels are quick to quash it. The recent history of DARPA is one that has shifted towards immediate results from industry and away from vision, at least in computing. NSF, DOE, NIST and other agencies have also shortened their horizons, despite claims to the contrary. Recommendations for action (including the recent CSIS Commission report to the President) continue this by posing the problem as how to secure the current infrastructure rather than asking how we can build and maintain a trustable infrastructure to replace what is currently there.

vulnerability in Google Gears

A vulnerability (already fixed) that is interesting for its subtlety and for falling outside the top 10 web application vulnerabilities:

Breaking Google Gears' Cross-Origin Communication Model
from the IBM Rational Application Security Insider blog

native code in the browser?!

Initially browsers only displayed HTML pages. Then they began running Java code (in a sandbox), JavaScript (a limited instruction set), etc. But of course that code is interpreted, so it is not as fast as native code. The way out? Run native code in the browser, of course. That is Google's new idea, which looks like a strong candidate for "silly idea of 2008". They acknowledge that making this technology safe is a considerable challenge and have protection mechanisms:

"To help protect users from malware and to maintain portability, we have defined strict rules for valid modules. At a high level, these rules specify 1) that all modules meet a set of structural criteria that make it possible to reliably disassemble them into instructions and 2) that modules may not contain certain instruction sequences. This framework aims to enable our runtime to detect and prevent potentially dangerous code from running and spreading"

but it is easy to see that this is an inexhaustible source of problems. In fact, the problem may well be unsolvable, since it amounts to looking at a program and deciding whether or not it is safe.

Addendum, 22/12/08: A report by Google on NaCl is now available.
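A toy Python illustration of the two rules in the quote (nothing like the real x86 validator; the bundle size and the two banned opcode bytes are simplifications chosen for the sketch): code must decompose into fixed, aligned bundles so it can be disassembled reliably, and certain instructions are simply forbidden.

```python
BUNDLE = 32                    # assumed fixed bundle size for this sketch
BANNED = {0xC3, 0xCD}          # e.g. x86 RET and INT imm8, disallowed here

def validate(module: bytes) -> bool:
    # Rule 1: only whole, aligned bundles, so disassembly can restart
    # deterministically at every bundle boundary.
    if len(module) == 0 or len(module) % BUNDLE != 0:
        return False
    # Rule 2: reject banned instruction bytes (a gross simplification
    # of the real banned-instruction-sequence checks).
    return all(b not in BANNED for b in module)

validate(bytes(32))            # True: one clean, aligned bundle
validate(bytes(31))            # False: not bundle-aligned
validate(b"\xc3" + bytes(31))  # False: contains a banned RET byte
```

The hard part, as the post argues, is everything this sketch ignores: variable-length x86 instructions, jumps into the middle of instructions, and every clever encoding an attacker can find.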

search inside malware!?

The idea seems original to me, but others will certainly appear doing the same: a search engine for looking up terms inside malware configurations. The idea is to allow, for example, a financial institution to check whether its sites are targets of current versions of certain worms and other malware. The current version searches inside only three strains:

SilentBanker configuration file (Q1 2008)
WSNPOEM/Zeus/PRG/Zbot configuration file (Q4 2008)
Torpig configuration file (Q2 2008)

more information: http://www.trusteer.com/FIsearch/open_search.php

lists of static analysis tools

two lists with many of them:

at NIST (SAMATE project): https://samate.nist.gov/index.php/Source_Code_Security_Analyzers

on Wikipedia: http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis

(IN)SECURE Mag


an interesting, free security magazine, in PDF:

http://www.net-security.org/insecuremag.php

defacements matter

well put: defacements are in themselves not very serious, but they reveal the existence of threats and vulnerabilities that are:

"I may be in the minority by stating the following, however, I believe that web defacements are a serious problem and are a critical barometer for estimating exploitable vulnerabilities in websites. Defacement statistics are valuable as they are one of the few incidents that are publicly facing and thus can not easily be swept under the rug.

(...)

The resulting risk of a web defacement might be low because the impact may not be deemed a high enough severity for particular organizations. What most people are missing, however, is that the threat and vulnerability components of the equation still exist."

from the Tactical Web Application Security blog (post)

advantages of static code analysis

a very interesting list on the blog http://sylvanvonstuppe.blogspot.com/:

... Before I begin, know that I believe that there is no silver bullet to application security. Nor do I think static source code analysis is the "best" method of finding vulnerabilities. Here are some of the valid or most important reasons that static analysis should not stand alone:

* Static analysis is really best at finding semantic flaws - bad API use or failure to use certain API's, etc.
* Static analysis doesn't give compelling pretty pictures and videos of your application giving up information. The results of a static analysis are only meaningful to developers, and then, only meaningful to developers who understand the real risk of the types of findings.
* Static analysis almost always requires really expensive tools to do a really, really good job. There are grep types of analyzers, but they don't follow taint through an application.
* Static analysis may analyze components of your code that don't get used. There are still prioritization decisions to be made.
* Static analysis tools can't find logical flaws such as privilege escalation or XSRF.
* Static analysis has different requirements than black box testing:
o Developers who understand the code and can fix it
o The source code
o For many tools, the code needs to at least build (doesn't have to run)

However, there are some really, really good reasons static analysis should be a part of your security toolbelt:

* Static analysis can find vulnerabilities that dynamic analysis can't - corner cases. "This cross-site scripting flaw only exists on Tuesdays" - if your application was tested in a running state on Monday, you won't know that the flaw exists. Thread safety issues are very bad for an application, but a black box test of an application might never cause one to come up, and if it does, it's nearly impossible to reproduce, and the results don't say to the oracle that it was that type of vulnerability. (For example, the application gave you access even though you used the wrong password.)
* The results of static analysis are meaningful to developers. They get lines of code back where untrusted data enters the application, where it flows through the application, and when it exits the application. These are the exact lines that the developers need to fix, which a black box test alone can't give you.
* Since the results of a static analysis are geared toward the developers, it provides "instant training" for developers. "What does it take to make this shut up?" (While I prefer developers understand why you want it to shut up, finding all the places is pretty good, too.)
* Static analysis can happen much earlier in the development process, long before the application is functional. This gives black box testers more time to test the really cool stuff that static analysis can't find.
* Static analysis can take place as part of a build process, automatically generating problem tickets and/or preventing the promotion of code with high-probability, high-risk findings. This can be done with automated black-box tools, but it requires a running environment - many more moving parts.
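To give a feel for the "grep types of analyzers" mentioned above, which flag bad API use without following taint through the application, here is a minimal sketch over Python's `ast` module (the sink list is an assumption for the example, not a recommended ruleset):

```python
import ast

DANGEROUS = {"eval", "exec", "system"}   # assumed sink list for the sketch

def find_dangerous_calls(source: str):
    """Report (line, api) for every call to an API on the sink list."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = (fn.id if isinstance(fn, ast.Name)
                    else fn.attr if isinstance(fn, ast.Attribute)
                    else None)
            if name in DANGEROUS:
                # Exactly the developer-facing output the post describes:
                # a line number pointing at the code to fix.
                findings.append((node.lineno, name))
    return findings

code = "import os\nuser = input()\nos.system(user)\n"
print(find_dangerous_calls(code))   # [(3, 'system')]
```

A checker this simple cannot tell whether `user` is actually attacker-controlled; following that data flow is the taint tracking that, per the list above, separates the expensive tools from the grep-style ones.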

Powerfuzzer - a fuzzer for web sites


Project Website
================

http://powerfuzzer.sourceforge.net


Project Description
================

Powerfuzzer is a highly automated web fuzzer based on many other Open Source
fuzzers available (incl. cfuzzer, fuzzled, fuzzer.pl, jbrofuzz,
webscarab,wapiti, Socket Fuzzer) and information gathered from numerous
security resources and websites. It is capable of spidering a website and
identifying inputs.

Currently, it is capable of identifying these problems:
- Cross Site Scripting (XSS)
- Injections (SQL, LDAP, code, commands, and XPATH)
- CRLF
- HTTP 500 statuses (usually indicative of a possible
misconfiguration/security flaw incl. buffer overflow)

Designed and coded to be modular and extendable. Adding new checks should
simply entail adding new methods.

text lifted directly from here
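The "new check = new method" design can be sketched like this (the class names, payloads and detection heuristics are invented for illustration, not Powerfuzzer's actual API):

```python
class Check:
    """Base class for one vulnerability check; adding a new check is
    just adding a subclass with payloads and a detector."""
    payloads = []

    def vulnerable(self, response: str) -> bool:
        raise NotImplementedError

class XSSCheck(Check):
    payloads = ["<script>alert(1)</script>"]

    def vulnerable(self, response: str) -> bool:
        # A reflected payload in the response suggests XSS.
        return self.payloads[0] in response

class SQLErrorCheck(Check):
    payloads = ["'"]

    def vulnerable(self, response: str) -> bool:
        # A database error leaking into the page suggests injection.
        return "SQL syntax" in response or "SQLException" in response

def fuzz(send, checks):
    """send(payload) -> response body; returns names of firing checks."""
    return [type(c).__name__
            for c in checks
            for p in c.payloads if c.vulnerable(send(p))]

# Against a page that echoes input verbatim, only the XSS check fires:
fuzz(lambda p: p, [XSSCheck(), SQLErrorCheck()])   # ['XSSCheck']
```

The spidering step would supply `send` with the real inputs it discovered; each check then only has to know its payloads and how to recognize a hit.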

why web application security is different

interesting:

Building A Web Application Security Program: Part 3, Why Web Applications Are Different
on the Securosis blog

the list of reasons:
Custom code equals custom vulnerabilities
You are the vendor
Firewalls/shielding alone can’t protect web applications
Eternal Beta Cycles
Reliance on frameworks/platforms
Heritage (legacy) code
Dynamic content
New vulnerability classes

displaying user-submitted HTML

A post about this:

When you have to display html from the user
from the Code Insecurity blog

The problem, of course, is that reflecting user input is halfway to enabling cross-site scripting (XSS) attacks. If the input is HTML, even worse. A summary:

Step 1: Explicitly define the set of allowed tags.
Step 2: For each tag defined above, explicitly define the set of allowable attributes.
Step 3: Define a set of regexes to test the input from the user against the defined tags and attributes.
Step 4: Remove anything that does not pass the regex test. (This is the sanitization part)
Step 5: Be diligent. (Just like always)
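A minimal sketch of steps 1-4 (the tag/attribute allowlist is an example; regex-based HTML parsing is notoriously fragile, which is one reason step 5 matters):

```python
import re

# Step 1: allowed tags; Step 2: allowed attributes for each tag
ALLOWED = {"b": set(), "i": set(), "a": {"href"}}

# Step 3: a regex that finds every tag, capturing name and attributes
TAG_RE = re.compile(r"</?([a-zA-Z0-9]+)([^>]*)>")

def sanitize(html: str) -> str:
    # Step 4: drop any tag that fails the allowlist test.
    def keep_or_drop(m):
        name = m.group(1).lower()
        if name not in ALLOWED:
            return ""                  # tag not on the allowlist
        attrs = re.findall(r'(\w+)\s*=\s*"[^"]*"', m.group(2))
        if any(a.lower() not in ALLOWED[name] for a in attrs):
            return ""                  # tag carries a disallowed attribute
        return m.group(0)
    return TAG_RE.sub(keep_or_drop, html)

sanitize('<b>hi</b><script>alert(1)</script>')
# '<b>hi</b>alert(1)' -- the script tags are stripped
```

Note how little it takes to slip past a sketch like this (malformed tags, event handlers in unquoted attributes, encoded characters) -- hence "be diligent".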

Input validation versus encoding

A very interesting discussion about this in:

Input Validation - Not That Important on the manicode blog

The post starts by saying that validating input is less important than encoding it:

When I bring up almost any category of web application injection attacks, most folks in the field almost instinctively begin talking about "input validation". Sure, input validation is important when it comes to detecting certain attacks, but encoding of user-driven data (either before you present that data to another user, or before you use that data to access various services) is actually a great deal more important for truly stopping almost any class of web application injection attack.

but then there is an interesting discussion with arguments in favor of validation:

Encoding is the best way to protect against injection based attacks, as it is always safest to make sure the content you are handing off elsewhere is well formed and safe (...)
Input validation is the best way to protect your own app and its logic, while output encoding/sanitization is the best way to protect components you communicate with (clients, other servers, the system you are on, etc).


which leads to the post:
Output Sanitization on the Analytical Engine blog

An interesting case is second-order injection, which I believe is not solved by encoding.

DNA database

yet another mechanism that raises privacy concerns. In today's Público online:

DNA database is ready to launch and promises to reduce unsolved crimes

Excerpt:
The final step has been taken toward the creation of the Portuguese DNA profile database for civil and criminal identification. The regulation and operating rules still needed to put that instrument into practice were published, the day before yesterday, in the Diário da República, so the Instituto Nacional de Medicina Legal (INML) is now able to collect genetic information from everyone convicted of intentional crimes with actual prison sentences of three years or more.

Fewer than 2% of PCs fully patched

A surprising number from Secunia. Note that it refers to not having all software patched, which is different from not having the operating system patched. Still, even for the operating system the numbers are unlikely to be brilliant, since according to the post on the Secunia blog:

Number of insecure programs per PC/user:
0 Insecure Programs: 1.91% of PCs
1-5 Insecure Programs: 30.27% of PCs
6-10 Insecure Programs: 25.07% of PCs
11+ Insecure Programs: 45.76% of PCs

Meanwhile, some are starting to suggest that patches be made mandatory:

I am 100 percent aware of how unpopular an idea forced updating is, but that instinctive revulsion (I cringed, too) is itself an important part of the security problem. At what point do the very real costs of fighting and destroying botnets and the loss of productivity of the individual user begin to outweigh our collective desire to completely control how and when updates are performed? For Microsoft, that question isn't an intellectual exercise, but a real concern—how do you solve a security problem that's caused by users refusing to update their machines?

Code analysis: benefits and dangers

An interesting article about static and dynamic source code analysis tools. It argues that although these tools are very important for application security, they cannot be the only security measure taken and do not entirely replace, for example, manual analysis.

‘Dumbing down’ the security profession
Shyama Rose
Zero Day

Excerpt: "The usefulness of analysis tools for augmenting security reviews is undeniable. On large code bases it can reduce time investments. It provides insight into the code analysis process and can be used as a guide for reviewers. However, a negative trend is emerging where enterprises are relying solely upon automated approaches to gain insight into risk. This invokes a false sense of security as the relying party is likely unaware of the deficiencies associated with security guarantees that tools promote."

is it worth knowing the latest vulnerability?

An opinion piece saying no... it is better to start with the usual, and more important, security problems:

Breaking the zero-day habit
Mike Rothman
Zero Day

Excerpt: "I think that security professionals can spend their time more effectively by NOT chasing after the latest exploit, vulnerability or other attention-grabbing issue. Very small minorities of security folks actually have adequate defenses in place right now. The majority still has a lot of blocking and tackling to complete before they should be worried about the latest and greatest exploits."