Vulnerability Analysis is just fancy QA

by Bill Sempf 12. March 2022 13:27

Testing an application for vulnerabilities is a lot like testing an application for meeting the business requirements.  In both cases, the analyst has to have access to the application, an understanding of how the application works, and a test plan. Let's take them one at a time.

First is access.  While it seems like cheating, it isn't. For one thing, discovery of credentials takes time and (usually) social engineering, which we don't have and/or is out of scope.  For another, the analyst needs to work from the assumption of an insider threat.  I usually ask for two sets of credentials for each role in the system (two Admin accounts and two User accounts, for example).  This makes it a lot easier to test for privilege escalation.

Second is understanding how the application works.  Usually the QA team on an application has been building the test plan from the start of development, alongside the business analysis.  Vulnerability analysts don't have that benefit, so we have to do recon.  This is my recon list:

  • Qualys SSL Test
  • Scan network with nmap
  • Run Content Discovery in Burp
  • Index site with proxy: highlight important interactions
  • Analyze URLs for sensitive data
  • Brute force: FuzzDB directory and file scan
  • Search Google/Stack Overflow/Pastebin for selected filenames from the application and names found in comments
  • Take apart anything binary
  • Isolate common hidden form fields, cookies and URL Parameters

It helps out a lot.

Finally, there is the test plan. Again, we haven't been building a test plan as the application was developed, but that's OK.  We don't need to.  We are almost always testing for exactly the same things. So, for this, I use the OWASP ASVS. For most applications I use the Level One requirements of the ASVS, and the spreadsheet helpfully provided.

I want to remind everyone - the ASVS is open source.  It doesn't spring out of thin air.  It needs help from the community: you and me.  Check out the issues list on GitHub and see where you can help out.

Hmm, I did forget something - reporting.  I suppose that will be the third post in this series, as there is a lot to consider there too. Sometimes you need to use the client's format, sometimes the contractor's, and sometimes your own. I'll share mine, and you all can go from there!

Good testing, all.  If you liked this, start with On Tools, the previous post, below.

Tags:

AppSec

On Tools

by Bill Sempf 16. February 2022 14:26

Not too long ago, I was asked to do a technical interview for a set of tests.  This isn't unheard of, but it is odd. Usually, folks have heard about me from someone and that is good enough.  In this case, however, there was a special reason.  They were trying to avoid testers that were overly reliant on tools.

That's something I can get behind.  Too often I have come in on a forensic test only to find out that the last tester just ran a report out of ZAP or Burp and turned it over - no triage, no nothing. That is, admittedly, more or less useless. I was confident that I could explain the realities of the situation.

ZAP and Burp are "proxies" in the application sense.  They sit between the web browser and the web server and capture traffic for analysis.  The developer tools (the F12 key in the web browser) do the same thing, but ZAP and Burp are designed with vulnerability analysis in mind, and as such, they have a lot of tools to help a tester out.  For instance, right now I am using a tool inside Burp to check commonly used usernames and passwords on the login screen.  There are 1,340,656 combinations.  Could I type those all in? Of course!  Can I have a week for just that test? No? OK, well, I'll script it then.  That's a tool.
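Purely as an illustration of what "scripting it" means - the URL, field names, failure message, and wordlists below are all made up, not from any real engagement - a hand-rolled version of that check might look something like this:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class LoginCheckSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Stand-in wordlists; a real test uses far bigger lists (hence 1.3 million combinations)
        string[] usernames = { "admin", "administrator", "test" };
        string[] passwords = { "password", "123456", "letmein" };

        foreach (var user in usernames)
        foreach (var pass in passwords)
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["username"] = user, // hypothetical field names
                ["password"] = pass
            });
            var response = await client.PostAsync("https://target.example/login", form);
            var body = await response.Content.ReadAsStringAsync();
            // A failed login usually looks identical every time; flag anything that differs
            if (!body.Contains("Invalid username or password"))
                Console.WriteLine($"Interesting: {user}:{pass} -> {(int)response.StatusCode}");
        }
    }
}

That is all Intruder is doing under the hood, just faster and with better bookkeeping.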

There are a lot of tools available for Burp and ZAP.  The one in the example above is built in, and it is called Intruder.  Everything Intruder does, the tester can do manually.  However, Intruder will save you a lot of time.  What's most important is that the tester understands what they are doing with Intruder. It's not enough to just push buttons - everything that the tester does with a ZAP or Burp toolset should be something they could do manually, with a clear understanding of exactly what is being done.

I have an upcoming post with part of an answer to knowing exactly what is being done - a good test plan. Sneak Peek: I heartily recommend the Application Security Verification Standard from OWASP. But I have said too much already - that's for another time.

What is my toolset then, assuming I know what all of them are doing? It looks like this:

  • Most of my corporate customers expect Burp Suite history as evidence for the test, so I use Burp a lot. I'm a big fan of the add-in model, and have even written a couple.  Right now, I have several add-ins installed from the BApp Store - which is under the Extensions tab in the main tool.
  • If I have a choice in the matter, I will often use ZAP. It has a very slick API that allows for even more automation - yes, tools for tools (see the sketch after this list).  The results from the proxy are the same, of course.  It's just the output from the web server and the input from the web browser.
  • PowerShell.  Yep, you heard me right. I run a Windows shop, and PowerShell has a robust set of tools for testing services, handling certificates, and whatnot.  If you don't know it, I highly recommend you dig in a little.
  • Python. Like Powershell, it has a robust collection of tools for services and manipulation of requests.
  • Nikto.  This is a Perl application that tests for a boatload of known flaws in web servers and supply chain components.  Again, could I test each one manually? Of course.  Do I want to? It's not that I don't want to, I just don't have time.  People like me, but they won't pay me for a year for one test.
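About that ZAP API: as a sketch of what "tools for tools" means, here is roughly how you might drive ZAP's local REST API from C#. The port, API key, and target URL are placeholders for whatever your own instance is configured with:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ZapApiSketch
{
    static async Task Main()
    {
        var zap = "http://localhost:8080";   // ZAP's default local API endpoint
        var apiKey = "changeme";             // placeholder API key
        var target = "https://target.example";
        using var client = new HttpClient();

        // Kick off a spider of the target through the JSON flavor of the API
        var scan = await client.GetStringAsync(
            $"{zap}/JSON/spider/action/scan/?apikey={apiKey}&url={Uri.EscapeDataString(target)}");
        Console.WriteLine(scan);

        // Pull whatever alerts the passive scanner has raised so far
        var alerts = await client.GetStringAsync(
            $"{zap}/JSON/core/view/alerts/?apikey={apiKey}&baseurl={Uri.EscapeDataString(target)}");
        Console.WriteLine(alerts);
    }
}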

That's about it.  In my next post, "Vulnerability Analysis is just fancy QA" I'll talk a lot more about knowing what you are actually testing for, so this post and that one kinda go together.  Either way, I hope you got something from this info.

Postscript: I didn't get the job I mentioned at the top of the post. I use too many tools.

Tags:

AppSec

The Trouble With Teaching Secure Coding

by Bill Sempf 9. December 2020 00:00

Once a week or so, someone calls and asks for OWASP Top 10 testing.  I have to make the call on the spot whether to explain that isn't what they want, or say "Sure!" and then give them what they actually need, or have a larger scale meeting to see where their appsec maturity is, and base training on that.  Usually it is the third.

The problem is, app security is hard to teach, and frankly many shops need secure coding training, which is even harder.  Let's break down why that is the case.

OWASP Training yes, OWASP Top 10 training, no

OWASP is a great organization.  For those unfamiliar, it is a global nonprofit with the mission of evangelizing application security to developers.  It has its political problems, sure, but in general it solves a very hard problem with grace and clarity.

One of the most famous products to come out of OWASP is the Top 10.  This list is the most risky vulnerabilities discovered by member organizations, ranked.  It is useful.  Useful for printing out, rolling up, and smacking your CIO with until you get a security budget.

The OWASP Top 10 is not an application security plan.  It is also not a training curriculum.  It is a marketing vehicle, and a remarkably effective one. Use it for that and you are golden.  Try and do an OWASP Top 10 training, and you are performing a disservice.

This discussion doesn't go over well with most.  Everyone wants a magic bullet for application security, but there just isn't one.

Sorry.

The plan is simply to do three things:

1) Teach the developers to recognize security flaws.

2) Teach the developers to repair the security flaws.

3) Give the developers tools to prevent security flaws from ever making it in the code.

Let's count 'em down.

When you need to learn how to test apps

Let's be straight here.  The only way to make applications more secure is to code them securely.  Okey dokey? Good, that's settled.

Now.  There are a few things that need to happen first, and therein lies the rub.  CIOs and Dev Leads want to drop a process in place that will secure their code.  Then I stop by, put '; in their website search field, blow the whole thing up, and get the O face from the team. First, we need to show developers what the attacks are, and how to check for them.
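To make that concrete: the '; trick works because, somewhere on the backend, user input is being glued into a query with string concatenation. A hypothetical example - nobody's real code - along with the parameterized fix:

using System.Data.SqlClient;

class ProductSearch
{
    // The flaw: input concatenated straight into SQL, so a searchTerm like
    // "'; DROP TABLE Products; --" becomes part of the statement itself.
    public static SqlCommand Vulnerable(SqlConnection conn, string searchTerm) =>
        new SqlCommand(
            "SELECT * FROM Products WHERE Name LIKE '%" + searchTerm + "%'", conn);

    // The fix: parameterize, so input stays data and never becomes SQL.
    public static SqlCommand Safer(SqlConnection conn, string searchTerm)
    {
        var cmd = new SqlCommand("SELECT * FROM Products WHERE Name LIKE @term", conn);
        cmd.Parameters.AddWithValue("@term", "%" + searchTerm + "%");
        return cmd;
    }
}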

The issue among high-level development security instructors is that they are so far along in their personal skill set that they wanna talk about in-depth output encoding for style sheets, without realizing that many developers are still wondering what the other site in Cross-Site Scripting is, anyway. I get it. I do.  But we gotta judge that audience, and it's rough. In your average 40-person dev team you are gonna have 7-8 people that already know the basics, but not well enough to teach the other thirty-odd.  We need to start there.

Security champions - I love you all very much. Take a look at your dev teams. Close your eyes.  Take a deep breath.  Open your eyes.  Does everyone in there understand JWT risks? Does your organization remove dev names in comments? If not, you need to run an application security testing class.  No, you don't have to have everyone be an uber-hacker. But it is fun, and it does give everyone a starting point.

When you look at code in code review, ask what input validation is being done. Ask about how that viewstate is encoded.  If you get a glassy eyed stare, then consider a class on testing.

When you need to learn how to write secure code

Once folks can recognize insecure code, it is time to start fixing things. That sounds far, far easier than it actually is. However, this is when we need to start getting the development staff to build security into their everyday process.

My experience is that you need to do a few things.  First, static analysis.  It isn't perfect, but it is a start.  Static analysis is the process of analyzing the code itself to determine potential security flaws. Dynamic analysis is the act of looking for flaws in a running application. Either can be automated - meaning a script does the work - or manual - meaning a human does the work.  Automated static analysis, say with a tool like SonarQube, is very likely to generate a ton of false positives at the start, but the rules can be honed over time. It is an imperfect but fairly effective tool.

Another important tool that should be used is a secure coding standard.  This is a custom built document not unlike a style guide.  It is something you can hand to new devs and say "this is how we do things."  Now, this leads well into the next section, about language agnostic testing and training, because the secure coding document should be tailored to the platform used by your organization. 

Testing is language agnostic, but secure coding isn't

The issue, as one discovers writing a secure coding standard, is that testing is very platform agnostic, but writing more secure code is not.  From a tester perspective, I can say "you need to encode your outputs" but from the developer perspective, there is a different way for every language and platform.  Html.Encode()? Sanitize()? Different everywhere, and a few frameworks do the work for you.  
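For one concrete flavor of that: in modern .NET, a minimal sketch of HTML output encoding (using the HtmlEncoder that ships with the framework) looks like this - other platforms spell it completely differently, which is exactly the point:

using System.Text.Encodings.Web;

class OutputEncoding
{
    // Encode untrusted input before writing it into an HTML page:
    // "<script>" becomes "&lt;script&gt;" instead of executing.
    public static string ForHtml(string untrusted) =>
        HtmlEncoder.Default.Encode(untrusted);
}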

When the report is written, there should be remediation advice, and it should have detailed guidance.  However, that means the tester should have detailed information about the language, platform, and framework used to build the tested application.  This is extremely unlikely.

When trying to teach generally, there needs to be expertise in the language, platform, and framework.  Now, some folks know several languages, platforms, and frameworks, if they have been around a while.  I, for instance, know C# and ASP.NET on Windows, Java and JSP on Apache, and Python with various frameworks quite well. Others less so.  But I have been doing this a long, long time.  Teaching secure coding in Ruby on Rails requires a specialty in appsec AND Ruby.  That's not an everyday set of skills.

So what are we gonna do?

Whatcha gonna do?  It's not the easiest problem to solve. I have a system that I would like to share, though.

First, have someone give a security talk at your company.  Usually, I do a lunch and learn or something, obviously online these days. Go over the vaunted OWASP Top 10, or give a demo of Burp or ZAP. Heck, click F12 and see what you see. I usually invite developers, business analysts, and testers (quality assurance, whatever your term is). Some people will nap through it, some will stay after to ask questions.  Those people that stayed after might very well be your security champions.  

OK, so now we know who is interested.  Second, we do training on testing. Have the security champions help gather the folks they think need to understand what the vulnerabilities are, and hold a real training - one or two days, with labs - on application security testing.  This gets the core group of people the information they need about the vulnerabilities to look for.  In the labs, have them look for them.  In their own code.  Encourage folks to test their own dev instances.  Dig in.

Third, retrospective.  Get the champions back together.  What did we learn?  How can we do better?  Most important, what are the secure coding principles that must be taught?  This is where we solve the language agnostic issue.  You can't just call someone in to teach secure coding, you must learn what it means to you and your team and your company.

Fourth, write a secure coding standard. It should be based on the lessons from the retrospective.  Base it on the categories of vulnerabilities, but couched in developer terms.  I use:

  1. Security Principles
  2. Access Control
  3. Content Management
  4. Browser Interaction
  5. Exception Management
  6. Cryptography
  7. System Configuration

But your mileage may vary.  The goal is to build a guide you can give someone on their first day.  Say "We write secure code here.  This is how it is expected to be done."  Think that through.  Usually my documents are 12 pages or less.

Finally, you train the secure coding standard.  Now you know what needs to be trained.  Yes, you have to write the materials, but they can be reused.  It can be as long or as short as you like but you get everyone back together and teach.  Then, as new people join the team, you have the culture in place to hand them the document.

Next, if you want to, you start to enforce the standard with a static analysis process.  That, however, is for another post.

Tags:

AppSec

Insecure Binary Serialization: 2018 Redux

by Bill Sempf 1. December 2020 00:00

Back in 2018, I wrote about Insecure Binary Deserialization, and I'd like to give an update here for C# Advent. Originally, OWASP had just added Insecure Binary Deserialization to the OWASP Top 10, and while the problem was primarily in Java and older serialization libraries, there were still some .NET libraries that were vulnerable.  Well, no longer. Microsoft fixed the problem by deprecating the library in .NET 5.0, and here we will go over what we found in 2018, how Microsoft made the change, and what you need to do.

What is Serialization?

 At its most straightforward, serialization is the act of taking an in-memory object and encoding it as a string. There is a bit more to it than that, but it doesn't change anything in this article, so we won't get super detailed here. Encoding an in-memory object as a string does have a myriad of uses, though, from saving game state to passing objects from server to server with REST services in JSON.

Let's use game save files as an example. It is possible to save a serialized object as serialized binary instead of serialized text, but that makes it very difficult to send over the internet, either via a service or through a web page. If the game is standalone on PC and doesn't have to report state to a server, then binary format is fine - it will appear as hex in Windows and honestly be just as editable.  But you've got to do a lot more trial and error.

If the serialized object is text, then it is just a matter of decoding it, editing it, and saving it. It's pretty straightforward. Different frameworks use different serialization methods, and sometimes different methods within the same framework produce different formats.

Let's take a look at a quick example.

If I have a class like this:

[Serializable] // needed for BinaryFormatter to accept the type
public class SerializableClass
{
    public string StringProperty { get; set; }
    public int IntegerProperty { get; set; }
}

And if I use BinaryFormatter.Serialize() to make a serialized object after setting some values, then save it as a file, I get a blob of binary with the type and property names readable right in it.
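A minimal sketch of that call, assuming the class above (the file name and property values here are mine):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

class SaveDemo
{
    static void Main()
    {
        var save = new SerializableClass { StringProperty = "sempf", IntegerProperty = 42 };
        var formatter = new BinaryFormatter();
        using (var stream = File.Create("game.sav"))
        {
            // Writes an unsigned, unencrypted blob; the type and property names
            // are readable right in the file with any hex editor
            formatter.Serialize(stream, save);
        }
    }
}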

Pretty slick, eh?

How can Serialization be insecure?

 This file is unsigned, has no Message Authentication Code, and is not encrypted.

That's the quick answer.  The slow answer is that this file can be altered if you know what you are doing.  I covered that pretty well in my 2018 article, so I'll let it pass here and refer you there. That said, this means that any "binary" object (which looks like a bunch of gobbledygook to an inexperienced developer) is editable by an attacker. For instance, I changed a word (I'll let you guess which one) and changed the data in the program on file load.

This is just one example in C# - there are a number of examples of serializers that can become a security concern, for exactly the same reason.  The thing is, there are a number of better classes that do the work properly. We just need to use them. So that leads to - what changed.

What changed?

 TLDR: They started deprecating the classes that provide insecure serialization.

The Deets:

In .NET 5.0, Microsoft has begun deprecating the classes that provide insecure serialization.  They started with the BinaryFormatter, and the pattern will gradually be used for all serialization methods. It's easy to say "Hey don't use these methods" but it is harder to follow up.  So, as part of the Long Term Obsoletion Plan, use of the BinaryFormatter will cause a warning in your build.  Eventually, over some time, that will become an error, and then in the fullness of time (h/t Don Box) the appropriate classes and methods will be removed. The goal, of course, is to simply have developers use more secure versions of the serialization classes.

So, for instance, as seen in the early pre-release documentation at https://aka.ms/binaryformatter, the recommendation is to use JsonSerializer instead of BinaryFormatter.  If you use BinaryFormatter, you will get warnings today and errors tomorrow, and it will eventually break your build. Over time the other insecure methods and classes will be given the same treatment.
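The swap is pretty mechanical. As a sketch, using the class from earlier (values mine), System.Text.Json does the same round trip without the insecure binary format:

using System;
using System.Text.Json;

class JsonDemo
{
    static void Main()
    {
        var save = new SerializableClass { StringProperty = "sempf", IntegerProperty = 42 };

        // Produces {"StringProperty":"sempf","IntegerProperty":42}
        string json = JsonSerializer.Serialize(save);

        var restored = JsonSerializer.Deserialize<SerializableClass>(json);
        Console.WriteLine(restored.StringProperty);
    }
}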

In ASP.NET 5.0, BinaryFormatter is disabled by default.  I recommend you do not override this, especially if you have a big project that uses serialization.  Junior programmers and contractors will use BinaryFormatter because it is the tool they know, and it can sneak into your project.  Blazor also disables BinaryFormatter by default.  This is good news, because web applications are the weakest link much of the time, storing serialized objects in cookies, headers, and hidden form fields to simplify communication. An attacker can deserialize those blobs, then steal secrets or change values. You must be on the watch, and default disable is a great way to help with that.

BinaryFormatter was the first but will hardly be the last serialization feature to be deprecated.  For instance, certain features of SecureString use serialization in an insecure way, and they are on their way out too.

What you need to do.

This is a fluid situation, so the best thing you can do is keep informed.  Subscribe to the Binary Formatter Security Guide.  As time passes, the updates will show there.  You can also get involved!  Two of the key folks involved in this are beer-drinking level friends of mine, and they assure me that the commitment to community involvement is as important if not more important to the security folks as it is to the rest of the developer community.  Write some stuff!

Upgrade to .NET 5.0 as soon as you can.  Then watch your warnings.  If your code base is clean enough, break the build on warnings.  Depending on your CI/CD situation, you might be able to break only on certain parts of the code, which is awesome.  That way, you can take advantage of the changes they are making to the framework as they happen.

Search your codebase for BinaryFormatter and replace instances with format-specific serializers: for instance, JsonSerializer for JSON and XmlSerializer for XML, or BinaryReader and BinaryWriter for simple custom binary formats. Also, if you are working with data objects, take a look at DataContractSerializer.  All of those classes validate inputs properly.

Avoid the SoapFormatter, LosFormatter, and ObjectStateFormatter completely.  Those use the old underlying structure, and are not secure.

Finally, consider whether you need to use a serialized object at all. There are a lot of better ways to solve that problem. For instance, make an indirect object reference: save the information server side and give the client the reference ID.  That's better on all counts.
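A minimal sketch of that pattern - keep the object server side in some store, and hand the client only an opaque ID (the dictionary here stands in for whatever session store or cache you actually use):

using System;
using System.Collections.Generic;

class ObjectReferenceMap
{
    private readonly Dictionary<Guid, SerializableClass> store =
        new Dictionary<Guid, SerializableClass>();

    // The client never sees the object, only this ID
    public Guid Put(SerializableClass value)
    {
        var id = Guid.NewGuid();
        store[id] = value;
        return id;
    }

    public SerializableClass Get(Guid id) =>
        store.TryGetValue(id, out var value) ? value : null;
}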

Assume always that input is untrustworthy.  Consider your risk model and then ask yourself "if someone sent in malicious input here, what could happen?"  Dig deep.  You might be surprised what you learn about your code.

Tags:

AppSec | C#

Application Security This Week for July 19

by Bill Sempf 19. July 2020 13:40

The Enterprise Security API for Java went to 2.2.1.0

https://github.com/ESAPI/esapi-java-legacy/blob/esapi-2.2.1.0/documentation/esapi4java-core-2.2.1.0-release-notes.txt

 

Microsoft's .NET Framework is getting rid of the Binary Formatter, erasing a significant security flaw

https://github.com/dotnet/designs/pull/141

 

Good writeup on pentesting GitHub source repos - a great place to find bugs in open source packages used by your apps

https://www.errno.fr/Attacking_source_repositories

 

PortSwigger's Burp Suite now includes a pre-configured browser as part of the Community Edition - a game changer if you are doing in-house training or CTFs

https://portswigger.net/burp/releases/professional-community-2020-7

 

Unquestionably the funniest POC for an exploit I have ever seen in my life

https://github.com/tinkersec/cve-2020-1350

 

That's the news, folks.  Hope everyone is well.

Tags:

AppSec

Winner's writeup for CodeMash CTF 2020

by Bill Sempf 15. January 2020 18:08

Austin Schertz won the CodeMash CTF this year, and he dropped off his answers to all 19 challenges.  Here they are:

 

Access Control

We got the password dump (400)

                This challenge provided a set of passwords. I recognized that they were hashes and used an online tool to look up the hash values and put them in the correct format. (cm20-XXXX-XXXX-XXXX in this case)

Binary Analysis

 Need more coffee!!! (100)

                The file had no extension, so I used an online file checker to identify it as a java.class file. From there, I renamed the file with the proper extension and ran it from the command line.

 Need even MOAR coffee!!!!! (300)

                This one was a bit more confusing. Renaming it (first to .class and then to cm.class) and running it produced a cryptic error about a missing class. Decompiling revealed some very obfuscated java code. Ultimately, someone suggested that I run the file, and look more closely at the errors. The error suggested that I needed to create another class that would be referenced by the original file. After creating a separate class, I was then informed by the errors that an interface was expected. Changed to an interface, and found that I needed to add an annotation, and then that the annotation needed to be a runtime annotation. Running it this way produced the flag. 

 One Time at Band Camp (300)

                This one provided an AmericanPie audio file. I googled ways to hide text in an audio file, and I came across a few articles about audio steganography that referenced using Sonic Visualiser and applying a spectrogram. I did that and found the flag around the 6 minute mark. It looked a little off, but I substituted cm20 for what looked like cy20, and it worked just fine.

I C What You Did There (400)

                I got some help on this one from an older and wiser friend of mine. I had tried several ways to look at the audio file, but he listened to the file and immediately recognized it as the sound of a Commodore 64 file. Once I knew it was a C64 file I downloaded a converter to go from WAV to TAP. I ran the tap file in an online C64 emulator.

Binary Deserialization

The button doesn't do what you want (300)

                I was super overthinking this one. I tried all kinds of JSON stuff to no avail. In the “thislooksinteresting” element, I decoded the value from Base64 and saw <GiveMeFlag>. I tried lots of complicated things, but the ticket was changing the “n” to a “y”. I did it by looking up the Base64 value for “n” and replacing it with the Base64 value for “y” in Chrome dev tools. After that, it was as easy as pushing the button.

Encoding

All your base are belong to us! (100)

                The string was base64. Decoding produces the flag.

These soundex exactly the same! (100)

                I used the government soundex page to understand what soundex was. All three of the statements in the hint have the same soundex translation. Appending cm20 and putting dashes in the right locations produced the flag.

All your base are belong to us - level 2 (300)

                The string was base 64. Decoding it produced what appeared to be a PNG file. I copied it to a blank file and opened the image. There was the flag.

All your base are belong to us - level 3 (500)

                This one was base 64, but with a twist. A close look revealed the word “fish” at the end of the file. Removing that allowed for base 64 decoding, but the result was still base 64, and there was another instance of “fish” at the end. I wrote some C# to remove fish and decode from base 64 in a loop. Doing this 42 times produced the flag.
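(His C# isn't reproduced here, but the loop would have looked something like this - the input file name is assumed:)

using System;
using System.IO;
using System.Text;

class FishPeeler
{
    static void Main()
    {
        var data = File.ReadAllText("challenge.txt").Trim();
        // Each layer ends in "fish"; strip it, decode the Base64, repeat
        while (data.EndsWith("fish"))
        {
            data = data.Substring(0, data.Length - "fish".Length);
            data = Encoding.UTF8.GetString(Convert.FromBase64String(data));
        }
        Console.WriteLine(data); // 42 layers in, the flag
    }
}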

Encryption

Where's the bacon? (100)

                This one was a bacon cipher. I used a tool called dcode to reveal the flag.

What is missing? (200)

                I recognized another bacon cipher hiding in the bold and italics tags. I manually copied the tags in order to notepad, and fed the result to dcode to produce the flag.

Incident Response

Ghost In The Keys (400)

                I opened the file in Wireshark, and saw the leftover transfer data. I looked at some articles about how to recognize keyboard data in Wireshark, and how to set up custom columns. When I had the data that I wanted, I dumped the results to Excel and manipulated them, converting the leftover data to keystrokes, noting that the 02’s are shifts, and the other data was keypresses. Ultimately this reconstructed a PowerShell execution with a reference to a web page in it. Accessing the web page produced the flag.

Mobile

Why did you do this to us, iOS? (200)

                I looked up how to open the file, and found that I could rename it and unzip it. After I unzipped it, I found a flag element in the plist file. It was a bunch of numbers, and I manually translated those numbers to other characters. This produced the flag.

On Site Challenges

You're gonna need a broom (1000)

The reference to the scytale was apt. I found a strip of paper attached to the wall in the game room, and a broom up against the wall. I wrapped the paper around the broom, and read off the numbers. I recognized it as hex (no letters from late in the alphabet). Plugging it into a hex converter, I found that the section I read off was only “cm20”. So I went back over, got the broom, read off the other sides of it, and converted the hex to get the rest of the flag.

Social Engineering

Slack Challenge (100)

                Searching for cm20 in the capture the flag slack channel produced the flag.

THE BADGE CHALLENGE (300)

                I did this the hard way. . . I was not sure that I could get someone to loan me their badge to tinker with, so I went and got the source from Bill’s GitHub, and found a file that contained an array that would eventually become a bitmap. So I grabbed the array, manipulated it, loaded it into Excel, and used conditional formatting to make a QR code in a spreadsheet. Scanning it with my phone produced the flag.

Web Security

Leprechaun Rally (200)

                This one was clever. I attempted to speed up the calling process to get more coins, but I got throttled. At that point I understood the hint. You need to BECOME the leprechaun with the most coins. So I set my efforts to obtaining a fraudulent session. I realized that by clicking the “stay logged in” button, there was another cookie added to all the requests. It was URL encoded, and Base64 encoded, but ultimately it was just “[Username]_ThisIsBadSalt”.  I created a new cookie value for the user Lucky_McPlucky, and edited my cookie in Chrome dev tools. This allowed me to become the luckiest leprechaun and retrieve the flag.
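(For the curious, forging that cookie value is a one-liner once you know the scheme - the username and salt string are exactly as described above:)

using System;
using System.Text;

class CookieForge
{
    static void Main()
    {
        // Base64 of "[Username]_ThisIsBadSalt", then URL encoded
        var raw = "Lucky_McPlucky_ThisIsBadSalt";
        var cookie = Uri.EscapeDataString(Convert.ToBase64String(Encoding.UTF8.GetBytes(raw)));
        Console.WriteLine(cookie);
    }
}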

Philosopher's Stone (300)

                I spent a decent amount of time looking at the page source for this one before getting a tip that I needed to look closely at the image. I messed with the image in LunaPic, and found a message in the bottom right corner of the image. Entering that led to another cryptic message. I thought it might be Base64, but discovered eventually that it was chess notation. After significant manipulation of the string, I entered it into lichess.org. This loaded the match, but I am no good at chess. It took running it as an AI to realize that white’s next move was a checkmate. I spent a while trying permutations of that move in chess notation in the solution bar. I found one that worked! But then I found another challenge was waiting for me. It looked like a flag, but it wasn’t. I looked at the page source and found a bunch of hidden whitespace characters in the middle of the flag string. Removing them didn’t work, so I thought maybe the whitespace was the flag? I pulled out the whitespace pattern and realized that it was Morse code. The decoded Morse was added to cm20{XXXXXX} to get the flag.

Tags:

AppSec

Application Security This Week for October 6

by Bill Sempf 6. October 2019 12:40

This is a blog entirely dedicated to security analysis of mobile apps.  No idea who writes it, but it is good.

https://theappanalyst.com/

 

Neat writeup on going from SQL Injection to Remote Code Execution.

https://medium.com/bugbountywriteup/sql-injection-to-lfi-to-rce-536bed29a862

 

I've been on a PHP project recently, and I learned about this cool tool to bypass disable_functions.

https://github.com/mm0r1/exploits/tree/master/php7-gc-bypass

 

Speaking of PHP, the static code analysis tool I learned to use was Exakat.  Steep learning curve, but unbelievable reports.  And open source!

https://github.com/exakat/exakat

 

That's the news, folks.

 

Tags:

AppSec

Application Security This week for June 30

by Bill Sempf 30. June 2019 09:46

Fascinating look into Internet routing that caused an outage last week.  We are really building this city on a bed of sticks.

https://blog.cloudflare.com/how-verizon-and-a-bgp-optimizer-knocked-large-parts-of-the-internet-offline-today/

 

Not my normal fare for this newsletter, but Microsoft added a secure vault to OneDrive.  Not in the US yet, but my Australian friends can give it a try.

https://www.windowscentral.com/microsoft-announces-onedrive-personal-vault-secure-area-within-your-onedrive

 

There is a directory traversal vulnerability in ... this blog!  Please don't hack me.  I'll update later today.

https://seclists.org/fulldisclosure/2019/Jun/44

 

MongoDB is adding field level encryption.  Now if folks would just use the authentication features ...

https://www.wired.com/story/field-level-encryption-databases-mongobd/

 

Found a VERY cool tool that lists known vulnerabilities in default containers.

https://vulnerablecontainers.org/

 

A weird edge case forces the npm deployment script to push the .git folder.  Remember, complexity is the enemy of security.

https://npm.community/t/npm-6-9-1-is-broken-due-to-git-folder-in-published-tarball/8454/2

 

And that's the news folks.

Tags:

AppSec

Application Security This Week for March 3

by Bill Sempf 3. March 2019 15:02

A new tool for finding malicious JavaScript and securely using external libraries.

https://blog.focal-point.com/a-new-tool-for-finding-malicious-javascript-and-securely-using-external-libraries

 

Acunetix has its annual report out.  Gotta give them your dox though, sorry.

https://www.acunetix.com/acunetix-web-application-vulnerability-report/?utm_source=hacktools&utm_campaign=security&utm_medium=content

 

PortSwigger has their annual report out too.  You do NOT need to give them your dox.  Just sayin'.

https://portswigger.net/blog/top-10-web-hacking-techniques-of-2018

 

Really cool video that shows the non-FUD dangers of digital exploitation, without using a single website, computer, or black hoodie.

https://www.grahamcluley.com/cybersecurity-video-no-computers/

 

New Google Translate exploit. Funny, because I used Google Translate as a counter-example in my REST security talk.

https://github.com/ljmf00/google-translate-exploit

 

Universal RCE with Ruby YAML.load()

https://staaldraad.github.io/post/2019-03-02-universal-rce-ruby-yaml-load/

 

And that's the news!

Tags:

AppSec

Application Security This Week for February 24

by Bill Sempf 24. February 2019 10:39

Cool PoC of the Mac vulnerability CVE-2018-4193, an RCE in WindowServer.

https://www.synacktiv.com/ressources/OffensiveCon_2019_macOS_how_to_gain_root_with_CVE-2018-4193_in_10s.pdf

 

Terrifying vulnerability in an underlying component of Docker, Kubernetes, and other virtualization software leads to hypervisor breakout.

https://github.com/lxc/lxc/commit/6400238d08cdf1ca20d49bafb85f4e224348bf9d

 

An Oracle DMCA takedown of a Docker container leads to some interesting build awareness. Good Reddit thread.

https://www.reddit.com/r/oracle/comments/arqhjc/our_builds_are_failing_because_oracle_has_dmca/

 

A fourteen-year-old code execution flaw was discovered in WinRAR's handling of ACE archives.  Whoops.  So much for the thousand eyes on open source theory.

https://arstechnica.com/information-technology/2019/02/nasty-code-execution-bug-in-winrar-threatened-millions-of-users-for-14-years/

 

Microsoft turbocharges GitHub's bug bounty program.

https://www.zdnet.com/article/github-bug-bounty-microsoft-ramps-up-payouts-to-30000-plus/

 

And that's the news!

Tags:

AppSec
