Wednesday, November 2, 2016

BSides Los Angeles - Experience and Slides

BSides LA - the only security con on the beach. If you haven’t had the opportunity to attend, you should make the effort. The volunteer staff are dedicated to making sure the conference goes well, and this year was another example of their hard work.

I enjoyed attending and learning from a number of sessions, and I was tickled to see so many presenters referencing the Verizon DBIR to give weight to their message. The corpus of data gives us so many ways to improve our industry. If you don’t already, you should consider contributing data from your own investigations through the VERIS framework.

My presentation was titled “USB Device Analysis,” and it led to a lot of great conversations afterwards. It was great meeting new faces, both young and old in the industry. The enthusiasm is infectious!

Many asked me for my slides, so I thought I would share them here along with some of these thoughts. Thanks to everyone for attending my talk and to the BSides organizers for having me.

One thing I talked about that isn’t in the slides is the need for user security awareness training. The studies mentioned in the slides show that from 2011 to 2016, not much has changed in public awareness of the danger of plugging in unknown USB drives. This has been demonstrated too many times to be an easy and effective way for attackers to infiltrate a chosen target.

For those of you in an Incident Response role, you don’t even have a chance to get involved unless your users recognize the threat.

My slides

diff-pnp-devices.ps1 on GitHub
Feel free to reach out with questions.

James Habben
@JamesHabben

Monday, October 31, 2016

Building Ubuntu Packages


Bruce Allen with the Naval Postgraduate School released hashdb 3.0, adding some great improvements for block hashing. My block hunting is mainly done on virtualized Ubuntu, so I decided it was time to build a hashdb package. I figured I would document the steps, as they can be used for SANS SIFT, REMnux, and many other great Ubuntu-based distributions too.

1) Ubuntu 64-bit Server 16.04.1 hashdb Package Requirements

sudo apt-get install git autoconf build-essential libtool swig devscripts dh-make python-dev zlib1g-dev libssl-dev libewf-dev libbz2-dev libtool-bin

2) Download hashdb from GitHub

git clone https://github.com/NPS-DEEP/hashdb.git

3) Verify hashdb Version

cat hashdb/configure.ac | more

The version number appears in the AC_INIT line near the top of configure.ac (3.0.0 at the time of this writing).

4) Rename hashdb Folder with Version Number

mv hashdb hashdb-3.0.0

5) Enter hashdb Folder

cd hashdb-3.0.0

6) Bootstrap GitHub Download

./bootstrap.sh

7) Configure hashdb Package

./configure

8) Make hashdb Package with a Valid Email Address for the Maintainer

dh_make -s -e email@example.com --packagename hashdb --createorig

9) Build hashdb Package

debuild -us -uc
                                  
10) Install hashdb

sudo dpkg -i hashdb_3.0.0-1_amd64.deb

Alternatively, if you just want to try the new version of hashdb, I have set up a limited hosted package repository at packagecloud.io for Ubuntu 64-bit Server 16.04.1.

1) Add hashdb Repository


2)  Install hashdb

sudo apt-get install hashdb

John Lukach

Tuesday, October 11, 2016

Windows Elevated Programs with Mapped Network Drives

This post is about a lesson that just won’t seem to sink into my own head. I run into the issue time and time again, but I can’t seem to cement it in place to prevent it from coming up again. I am hoping that writing this post will help you all too, but mostly this is an attempt to really nail it into my own memory. Thanks for the ride along! It involves the Windows feature called User Account Control (UAC) and mapped network drives.

With UAC, Microsoft changed the security behavior around programs that require elevated privileges to run properly. Some of you may already be thinking, why not just disable UAC entirely? I understand that thought, and I have done it on some of my machines, but I keep others with UAC at the default level for a couple of reasons. 1) It does provide an additional layer of security for machines that interface with the internet. 2) I develop with a number of different scripts and languages, and it is helpful to have a machine with default UAC to run tests against, to ensure that my scripts and programs behave as intended.

One of those programs that I use occasionally is EnCase. You can create a case and then drag-and-drop an evidence file into the window. When you try this from a network share, however, you get an error message stating that the path is not accessible. The cause has to do with Windows holding a different logon token for each mode of your user session. When you click that ‘Yes’ button to allow a program to run with elevated privileges, you have essentially logged out and logged back in as a different user; it is just automated in the background for convenience so you don’t have to actually perform the logout.

Microsoft has a solution that involves opening an elevated command prompt to use ‘net use’ to perform a drive mapping under the elevated token, but there is another way to avoid this that makes things a little more usable. It just involves a bit of registry mumbo jumbo to apply the magic.

You can see in the following non-elevated command prompt that I have a mapped drive inside my VM that exposes my shared folders.


Now, in this elevated command prompt, you will find no mapped drive. Again, this is a shared folder through VMware Fusion, but the same applies to any mapped drive you might encounter.

The registry value that unlocks easy mode is at the following location:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLinkedConnections

Give that value DWORD data of 0x1, and your mapped network drives will show up in elevated programs just the same as in non-elevated programs.

Here is the easy way to make this change. Run the following command at an elevated command prompt (writing under HKLM requires administrator rights):
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1

Then you can run the following command to confirm the addition of the reg value:
reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
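If you prefer to script the change, here is a minimal Python sketch using the standard winreg module (Python 3). It is only an illustration of the same registry edit, and it must be run from an elevated Python session since writing under HKLM needs admin rights.

# Minimal sketch: set EnableLinkedConnections with Python's winreg module.
# Run from an elevated Python session; writing under HKLM requires admin.
import winreg

KEY_PATH = r'SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE | winreg.KEY_QUERY_VALUE) as key:
    winreg.SetValueEx(key, 'EnableLinkedConnections', 0, winreg.REG_DWORD, 1)
    value, _ = winreg.QueryValueEx(key, 'EnableLinkedConnections')
    print('EnableLinkedConnections =', value)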

Mostly I am hoping that writing this down helps me remember it without having to spend time consulting Auntie Google, but I hope it gives you some help as well.

James Habben
@JamesHabben

Friday, October 7, 2016

Know Your Network

Do you know what is on your network? Do you have a source of truth, like DHCP logs, for connected devices? How do you monitor for unauthorized devices? What happens if none of this information is currently available?

Nathan Crews (@crewsnw1) and Tanner Payne (@payneman) presented Simplifying Home Security with CHIVE at the Security Onion Conference 2016, which will definitely help those with Security Onion deployed answer these questions. Well worth the watch: https://youtu.be/zBDAjNnRiQI

My objective is to create a Python script that helps identify devices on the network using Nmap with limited configuration. I want to be able to drop a virtual machine or Raspberry Pi onto a network segment and have it perform discovery scans every minute using a cron job, generating output that can easily be consumed by a SIEM for monitoring.

I use the netifaces package to determine the network address that was assigned to the device for the discovery scans.
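Here is a minimal sketch of that idea, assuming a single default gateway; the function name is my own and not part of the published script.

# Minimal sketch: find the IPv4 address and netmask of the default interface.
import netifaces

def local_ipv4():
    # gateways()['default'][AF_INET] is a (gateway_ip, interface_name) tuple
    gateway, iface = netifaces.gateways()['default'][netifaces.AF_INET]
    inet = netifaces.ifaddresses(iface)[netifaces.AF_INET][0]
    return inet['addr'], inet['netmask']

print(local_ipv4())  # e.g. ('10.0.2.15', '255.255.255.0')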

I use the netaddr package to generate the network cidr format that the Nmap syntax uses for scanning subnet ranges.
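A minimal sketch of that conversion, again with names of my own choosing:

# Minimal sketch: convert an address and netmask into the CIDR string Nmap accepts.
from netaddr import IPNetwork

def to_cidr(addr, netmask):
    # IPNetwork('10.0.2.15/255.255.255.0').cidr -> IPNetwork('10.0.2.0/24')
    return str(IPNetwork('{0}/{1}'.format(addr, netmask)).cidr)

print(to_cidr('10.0.2.15', '255.255.255.0'))  # 10.0.2.0/24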

The script will be executed from cron, thus running as the root account, so it is important to provide absolute paths. Nmap also needs permission to listen for network responses, which is possible at this privilege level too.
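A sketch of what the scan invocation might look like; the paths, filenames, and cron schedule here are assumptions for illustration, not the exact contents of my script.

# Minimal sketch: run an Nmap ping-discovery scan using absolute paths,
# the way a root cron job would invoke it.
# Example crontab entry (assumption): * * * * * /usr/bin/python /opt/discover/discover.py
import subprocess

NMAP = '/usr/bin/nmap'
OUTPUT = '/opt/discover/scan.txt'

def discover(cidr):
    # -sn does host discovery only; running as root lets Nmap use ARP probes
    # -oN writes the human-readable "normal" output that gets parsed later
    subprocess.check_call([NMAP, '-sn', '-oN', OUTPUT, cidr])

discover('10.0.2.0/24')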

I take the multi-line native Nmap output and consolidate it down to a single line per device. The derived fields use equals signs (=) for the labels and pipes (|) to separate the values. I parse out the scan start date, scanner IP address, identified device IP address, identified device MAC address, and the vendor associated with the MAC address.
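Here is a simplified sketch of that flattening step. The field labels match the description above, but the regular expressions and function name are my own illustration based on Nmap's normal output format.

# Simplified sketch: flatten Nmap's normal (-oN) output into one line per
# device, with '=' labels and '|' separators.
import re

def flatten(nmap_text, scan_start, scanner_ip):
    records = []
    ip = None
    for line in nmap_text.splitlines():
        report = re.match(r'Nmap scan report for .*?([\d.]+)\)?\s*$', line)
        if report:
            ip = report.group(1)
        mac = re.match(r'MAC Address: ([0-9A-F:]+) \((.*)\)', line)
        if mac and ip:
            records.append('start={0}|scanner={1}|ip={2}|mac={3}|vendor={4}'.format(
                scan_start, scanner_ip, ip, mac.group(1), mac.group(2)))
            ip = None
    return records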

I ship the export.txt file to Loggly (https://www.loggly.com) for parsing and alerting, as that allows me to focus on the analysis, not the administration.
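For completeness, here is a sketch of one way to ship the file over Loggly's HTTP API. The endpoint format, tag, and token placeholder are assumptions based on Loggly's documentation, so check your own account for the exact URL before relying on it.

# Minimal sketch: POST the flattened records to Loggly's HTTP bulk endpoint.
# Replace YOUR-CUSTOMER-TOKEN with the token from your Loggly account.
import requests

LOGGLY_URL = 'https://logs-01.loggly.com/bulk/YOUR-CUSTOMER-TOKEN/tag/nmap/'

def ship(path='/opt/discover/export.txt'):
    with open(path, 'rb') as handle:
        # the bulk input accepts newline-separated plain-text events
        requests.post(LOGGLY_URL, data=handle.read(),
                      headers={'content-type': 'text/plain'})

ship()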

John Lukach
@jblukach

Wednesday, August 3, 2016

GUIs are Hard - Python to the Rescue - Part 1

I consider myself an equal opportunity user of tools, but in the same respect I am also an equal opportunity critic of tools. There are both commercial and open source digital forensic and security tools that do a lot of things well, and a lot of things not so well. What makes a good DFIR examiner is the ability to sort through the marketing fluff to learn what these tools can truly do, and to figure out what they do very well.

One of the things that I find limiting in many of the tools is the Graphical User Interface (GUI). We deal with a huge amount of data, and we sometimes analyze it in ways we couldn’t have predicted ourselves. GUI tools make a lot of tasks easy, but they can also make some of the simplest tasks seem impossible.

My recommendation? Every tool should offer output in a few formats: CSV, JSON, SQLite. Give me the ability to go primal!

Tool of the Day


I have had a number of cases lately that have started as ‘malware’ cases. Evil traffic tripped an alarm, and that means there must be malware on the disk. It shouldn’t be surprising to you, as a DFIR examiner, that this is not always the case. Sometimes there is a bonehead user browsing stupid websites.

Internet Evidence Finder (IEF) is the best tool I have available right now to parse and carve the broad variety of internet activity artifacts from drive images. It does a pretty good job of searching over the disk to find browser, web app, and website artifacts (though I can’t tell exactly which versions are supported from the documentation, but I will save that digression for a different post).

Let me first cover some of IEF’s basic storage structure that I have worked out. The artifacts are stored in a SQLite database named ‘IEFv6.db’ in the folder that you designate. Every artifact type that is found creates at least one table in the DB. Because each artifact has different properties, each of the tables has a different schema. The dev team at Magnet does seem to have settled on some conventions that allow for a bit of consistency: if a column holds URL data, the column name has ‘URL’ in it, and similarly for dates, the column name has ‘date’ in it.
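A quick way to see this convention for yourself is to walk the schema with Python's sqlite3 module. This is just a sketch and assumes the IEFv6.db file is in the current directory.

# Minimal sketch: list every table in IEFv6.db and any columns whose names
# contain 'URL' or 'date'.
import sqlite3

conn = sqlite3.connect('IEFv6.db')
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    columns = [row[1] for row in
               conn.execute('PRAGMA table_info("{0}")'.format(table))]
    hits = [c for c in columns if 'url' in c.lower() or 'date' in c.lower()]
    if hits:
        print(table, hits)
conn.close()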

IEF provides a search function that allows you to do some basic string searching, or you can get a bit more advanced by providing a RegEx (GREP) pattern. When you kick off a search, IEF creates a new SQLite db file named ‘Search.db’ in the same folder. You can only keep one set of search results at a time, since kicking off a new search causes IEF to overwrite the previous search db. The search db, from what I can tell anyway, has a schema identical to the main db; it has simply been filtered down to the records that match the keywords or patterns you provided.

There is another feature called filter, and I will admit that I have only recently found it. It allows you to apply various criteria to the dataset, the major one being a date range. There are other things you can filter on, but I haven’t needed to explore those just yet. When you kick off this process, you end up with yet another SQLite database holding a reduced number of records based on the criteria, and again it appears identical in schema to the main db. This one is named ‘filter.db’, which indicates that the dev team doesn’t have much creativity. ;)


Problem of the Month


The major issue I have with the tool is the way it presents data to me. The interface has a great way of digging into forensic artifacts as they are categorized and divided by artifact type. You can dig into the nitty-gritty details of each browser artifact. For the cases I have used IEF on lately, and I suspect for many of your Incident Response cases as well, I really don’t care *which* browser was the source of the traffic. I just need to know if that URL was browsed by bonehead so I can get him fired and move on. Too harsh? :)

IEF doesn’t give you a view where all of the URLs are consolidated. You have to click, click, click down through the many artifacts and look through tons of duplicate URLs. The problem behind this lies in the design of the artifact storage: multiple tables with different schemas in a relational database. A document-based database, such as MongoDB, would have provided an easier search approach, but there are trade-offs that I don’t need to go off on a tangent about here. I will just say that there is no 100% clear winner.

To perform a search over multiple tables in a SQL-based DB, you have to implement it in some kind of program code, because a single SQL query is almost impossible to construct. SQLite makes it even more difficult with its reduced list of native functions and its inability to define functions or stored procedures in SQL itself; it just wasn’t meant for that. IEF handles this task for the search and filter process in C# code, and creates those new DB files as a sort of cache mechanism.

Solution of the Year


Alright, I am sensationalizing my own work a bit too much, but it is easy to do when it makes your work so much easier. That is the case with the Python script I am showing you here. It was born out of necessity and tweaked to meet my needs on different cases. It has saved me a lot of time, and I want to share it with you.


This Python script can take as input (-i) any of the three IEF database files I mentioned above, since they share schema structures. The output (-o) is another SQLite database file (I know, like you need yet another one) in the location of your choosing.

The search (-s) parameter allows you to provide a string to filter the records on, keeping only records where that string is present in one of the URL fields being transferred. I added this one because the search function of IEF doesn’t allow me to direct the keyword at a URL field; my keywords were hitting on several other metadata fields that I had no interest in.

The limit (-l) parameter was added because of a bug I found in IEF with some of the artifacts. I think it was mainly in the carved artifacts, so I really can’t fault them too much, but it was causing a size and time issue for me. The bug is that the URL field for a number of records was pushing over 3 million characters long. Let me remind you that each ASCII character is a byte, so 3 million of them makes a URL that is 3 megabytes in size; and since URLs are allowed to be Unicode, go ahead and double that. I found that most browsers start choking if you give them a URL over 2,000 characters, so I decided to cut off the URL field at 4,000 by default to give just a little wiggle room. Magnet is aware of this and will hopefully solve the issue in an upcoming version.

This Python script opens the IEF DB file and works its way through each of the tables looking for any columns that have ‘URL’ in the name. When one is found, it grabs the artifact type and the URL value to create a new record in the new DB file. Some of the records in the IEF artifacts have multiple URL fields, and this takes each of them into the new file as a simple URL value. The source column records the name of the table (artifact type) and the name of the column that the value came from.
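To give you an idea before part 2, here is a heavily simplified sketch of that core loop. It is not the full ief-find-url.py (no views, no search or size handling beyond a simple truncation), and the output table name is my own choice for illustration.

# Simplified sketch: copy every URL value from every table of an IEF database
# into one consolidated table, recording which table.column it came from.
import sqlite3

def consolidate(ief_db, out_db, limit=4000):
    src = sqlite3.connect(ief_db)
    dst = sqlite3.connect(out_db)
    dst.execute('CREATE TABLE IF NOT EXISTS urls (source TEXT, url TEXT)')
    tables = [r[0] for r in
              src.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [r[1] for r in
                   src.execute('PRAGMA table_info("{0}")'.format(table))]
        for column in [c for c in columns if 'url' in c.lower()]:
            rows = src.execute('SELECT "{0}" FROM "{1}"'.format(column, table))
            for (url,) in rows:
                if url:
                    # truncate oversized URLs, mirroring the -l behavior
                    dst.execute('INSERT INTO urls VALUES (?, ?)',
                                ('{0}.{1}'.format(table, column), str(url)[:limit]))
    dst.commit()
    src.close()
    dst.close()

consolidate('IEFv6.db', 'urls.db')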

This post has gotten rather long, so this will be the end of part 1. In part 2, I will go through the new DB structure to explain the SQL views that are created and then walk through some of the code in Python to see how things are done.

In the meantime, you can download the ief-find-url.py Python script and take a look for yourself. You will have to supply your own IEF.

James Habben
@JamesHabben

Tuesday, June 7, 2016

Reporting: Benefits of Peer Reviews

Now that you are writing reports for personal and professional benefit, let’s look at some other ways you can get value from these time suckers. You need the help of others on this one, since you will be giving your reports to them to seek a review. You need this help from outside of your little bubble to ensure that you are pushing yourself adequately.


You need a minimum of 2 reviews on the reports you write. The first review is a peer review, and the other is a manager review. You can throw additional reviews on top of these if you have the time and resources available, and that is icing on the cake.

Your employer benefits from reviews for these reasons:
  • Reduced risk and liability
  • Improved quality and accuracy
  • Thorough documentation

There are more personal benefits here too:
  • Being held to a higher standard
  • Gauge on your writing improvement
  • You get noticed


Let me explain more about these benefits in the following sections.

Personal Benefits

Because the main intention of this post is to show the personal benefits and improvements, I will start here.

Higher Standards

The phrase ‘You are your own worst critic’ gets used a lot, and I do agree with it for the most part. For those of us with a desire to learn and improve, we have that internal drive to be perfect. We want to be able to bust out a certain task and nail it 110% of the time. When we don’t meet our high standards, we get disappointed in ourselves and note the flaws so we can do better next time.

Here is where I disagree with that statement just a bit. We can’t hold ourselves to a standard that we don't understand or even have knowledge about. If you don’t know proper grammar, it is very difficult for you to expect better. Similarly in DFIR, if you don’t know a technique to find or parse an artifact, you don’t know that you are missing out on it.

Having a peer examiner review your report is a great way of getting a second pair of eyes on the techniques you used and the processes you performed. They can review all of your steps and ask you questions to cover any potential gaps. In doing this, you then learn how the other examiners think and approach these scenarios, and can take pieces of that into your own thinking process.

Gauging Your Improvement

Your first few rounds of peer review will likely be rough with a lot of suggestions from your peers. Don’t get discouraged, even if the peer is not being positive or kind about the improvements. Accept the challenge, and keep copies of these reviews. As time goes on, you should find yourself with fewer corrections and suggestions. You now have a metric to gauge your improvement.

Getting Noticed

This is one of the top benefits, in my opinion. Being on a team with more experienced examiners can be intimidating and frustrating when you are trying to prove your worth. This is especially hard if you are socially awkward or shy since you won't have the personality to show off your skills.

Getting your reports reviewed by peers gives you the chance to covertly show off your skills. It’s not boasting. It’s not bragging. It’s asking for a check and suggestions on improvements. Your peers will review your cases and they will notice the effort and skill you apply, even if they don't overtly acknowledge it. This will build the respect between examiners on the team.

Having your boss as a required part of the review process ensures that they see all the work you put in. All those professional benefits I wrote about in my previous post on reporting go to /dev/null if your boss doesn’t see your work output. If your boss doesn’t want to be a part of it, maybe it’s a sign that you should start shopping for a new boss.

Employer Benefits

You are part of a team, even if you are a solo examiner. You should have pride in your work, and pride in the work of your team. Being a part of the team means that you support other examiners in their personal goals, and you support the department and its business goals as well. Here are some reasons why your department will benefit as a whole from having a review process.

Reduced Risk and Liability

I want to hit the biggest one first. Business operations break down to assets and liabilities. Our biggest role in the eyes of our employers is to be an asset to reduce risk and liability. Employees in general introduce a lot of liability to a company and we do a lot to help in that area, but we also introduce some amount of risk ourselves in a different way.

We are trusted to be an unbiased authority when something has gone wrong, be it an internal HR issue or an attack on the infrastructure. Who are we really to be that authority? Have you personally examined every DLL in that Windows OS to know what is normal and what is bad? Not likely! We have tools (assets) that our employers invest in to reduce the risk of us missing that hidden malicious file. Have you browsed every website on the internet to determine which are malicious, inappropriate for work, a waste of time, or valid for business purposes? Again, not a chance. Our employers invest in proxy servers and filters (assets) from companies that specialize in exactly that to reduce the risk of us missing one of those URLs. Why shouldn’t your employer put a small investment in a process (asset) that brings another layer of protection against the risk of us potentially missing something because we haven't experienced that specific scenario before?

Improved Accuracy and Quality

This is a no-brainer, really. It is embarrassing to show a report that is full of spelling, grammar, or factual errors. Your entire management chain will be judged when people outside of that chain read through your reports. The best conclusions and recommendations in the world can be thrown out like yesterday’s garbage if they are surrounded by easy-to-find errors. It happens, though, because of the amount of time it takes to write these reports. You become blind to some of those errors, and a fresh set of eyes can spot them much more quickly and easily. Having your report reviewed gives both you and your boss extra assurance against the risk of sending out errors.

Thorough Documentation

We have another one of those ‘reducing risk’ things on this one. Having your report reviewed doesn’t give you any extra documentation in itself, but it helps to ensure that the documentation given in the report is thorough.

You are typically writing the report for the investigation because you were leading it, or at least involved in some way. Because you were involved, you know the timeline of events and the various twists and turns that you inevitably had to take. It is easy to leave out what seems like pretty minor events in your own mind, because they don’t seem to make much difference in the story. With a report review, you will get someone else’s understanding of the timeline. Even better is someone who wasn’t involved in that case at all. They can identify any holes that were left by leaving out those minor events and help you to build a more comprehensive story. It can also help to identify unnecessary pieces of the timeline that only bring in complexity by giving too much detail.

Part of the Process

Report reviews need to be a standard part of your report writing process. They benefit both you and your employer in many ways. The only reason against having your reports reviewed is the extra time required by everyone involved in that process. The time is worth it, I promise you. Everyone will benefit and grow as a team.

If you have any additional thoughts on helping others sell the benefits of report reviews, feel free to leave them in the comments. Good luck!

James Habben
@JamesHabben

Tuesday, May 31, 2016

New Page: Python Libraries

I gave a talk at Enfuse 2016 on learning Python, and the responses I got after that talk were very appreciative and encouraging for me. I believe that everyone should invest in themselves to learn some programming. I was glad to hear that many others feel the same way, but I was not so glad to hear about the struggles that many of you are having in that learning process. With this post, I am hoping to give some help in that regard.

The number one piece of feedback I got from attendees after that session was about the libraries that I use and the key benefits I get from them. After hearing so much about that, I got to thinking and realized that I experienced the same thing when I was learning Python, and I still do as I learn new areas of it. There are a huge number of libraries available to accomplish many tasks, and they make writing our tools so much easier. The hard part is finding those libraries and deciding which of the many options is best for the current purpose.

The goal of this new page is for John and me to put up the various libraries that we have used in our projects and give some background on why we chose each one over the others. The list will grow over time as we add to it, and if a certain entry warrants an extra bit of info, we will write up a blog post so we don’t clutter that page up too badly. The format may change as it grows to make it easier to manage and read.

Hope you find this useful and good luck on your Python adventures!

http://blog.4n6ir.com/p/python-libraries.html

James Habben
@JamesHabben