Tuesday, October 11, 2016

Windows Elevated Programs with Mapped Network Drives

This post is about a lesson that just won’t seem to sink into my own head. I run into the issue time and time again, but I can’t seem to cement it in place to keep it from coming up again. I am hoping that writing this post will help you all too, but mostly this is an attempt to really nail it into my own memory. Thanks for the ride along! It involves the Windows feature called User Account Control (UAC) and mapped network drives.

With UAC, Microsoft changed the security behavior around programs that require elevated privileges to run properly. Some of you may already be wondering why I don’t just disable UAC entirely, and I can understand that thought. I have done this on some of my machines, but I keep others with UAC at the default level for a couple of reasons. 1) It does provide an additional layer of security for machines that interface with the internet. 2) I do development with a number of different scripts and languages, and it is helpful to have a machine with default UAC to test against to ensure that my scripts and programs will behave as intended.

One of those programs that I use occasionally is EnCase. You can create a case and then drag-and-drop an evidence file into the window. When you try this from a network share, however, you get an error message stating that the path is not accessible. The cause of this is that Windows holds a different access token for each mode of your user session. When you click that ‘yes’ button to allow a program to run with elevated privileges, you have essentially logged out and back in as a completely different user. That part is just automated in the background for user convenience so you don't have to actually perform the logout.

Microsoft has a solution that involves opening an elevated command prompt to use ‘net use’ to perform a drive mapping under the elevated token, but there is another way to avoid this that makes things a little more usable. It just involves a bit of registry mumbo jumbo to apply the magic.

You can see in the following non-elevated command prompt that I have a mapped drive inside of my VM that exposes my shared folders.

Now in this elevated command prompt, you will notice the lack of a mapped drive. Again, this is a shared folder through VMware Fusion, but the same applies to any mapped drive you might encounter.

The registry value that unlocks easy mode lives in the following location:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

Create a value there named EnableLinkedConnections, give it a DWORD value of 0x1, and your mapped network drives will show up in elevated programs just the same as in non-elevated programs.

Here is the easy way to make this change. Run the following command at the command prompt:
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1

Then you can run the following command to confirm the addition of the reg value:
reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

Mostly I am hoping that this helps me to remember this without having to spend time consulting Auntie Google, but I also hope this might give you some help as well.

James Habben

Friday, October 7, 2016

Know Your Network

Do you know what is on your network?  Do you have a record of truth like DHCP logs for connected devices?  How do you monitor for unauthorized devices?  What happens if none of this information is currently available?

Nathan Crews @crewsnw1 and Tanner Payne @payneman presented Simplifying Home Security with CHIVE at the Security Onion Conference 2016, and it will definitely help those with Security Onion deployed answer these questions.  Well worth the watch: https://youtu.be/zBDAjNnRiQI

My objective is to create a Python script that helps with the identification of devices on the network using Nmap with limited configuration.  I want to be able to drop a virtual machine or Raspberry Pi onto a network segment and have it perform the discovery scans every minute from a cron job, generating output that can be easily consumed by a SIEM for monitoring.

I use the netifaces package to determine the network address that was assigned to the device for the discovery scans.
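
For reference, a minimal sketch of that lookup with netifaces might look like the following; the example addresses are obviously made up.

import netifaces

# Find the interface holding the default IPv4 route, then pull the address
# and netmask that were assigned to it.
gateways = netifaces.gateways()
interface = gateways['default'][netifaces.AF_INET][1]
ipv4 = netifaces.ifaddresses(interface)[netifaces.AF_INET][0]
address = ipv4['addr']      # e.g. '192.168.1.50'
netmask = ipv4['netmask']   # e.g. '255.255.255.0'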

I use the netaddr package to generate the CIDR notation that the Nmap syntax uses for scanning subnet ranges.
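
And a rough sketch of the netaddr piece, continuing from the address and netmask above:

from netaddr import IPNetwork

# Combine the interface address with its netmask and reduce it to the
# network in CIDR notation, which Nmap accepts as a target range.
network = IPNetwork('{0}/{1}'.format(address, netmask))
target = str(network.cidr)  # e.g. '192.168.1.0/24'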

The script will be executed from cron, thus running as the root account, so it is important to provide absolute paths.  Nmap also needs permission to listen for network responses, which running at this privilege level provides.
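
To illustrate the absolute path point, the scan invocation might look something like this, continuing with the target value from the netaddr step above. The file locations, the -sn ping sweep option, and the cron entry are my own assumptions for the sketch, not necessarily what the final script uses.

import subprocess

# Example cron entry (every minute):
# * * * * * /usr/bin/python /opt/knowyournetwork/scan.py

NMAP = '/usr/bin/nmap'                        # absolute path; cron provides a minimal PATH
RAW_OUTPUT = '/opt/knowyournetwork/nmap.txt'  # hypothetical location for the raw scan output

# Ping sweep of the derived CIDR range. Running as root lets Nmap use raw
# sockets, which is how it reports MAC addresses on the local segment.
with open(RAW_OUTPUT, 'w') as out:
    subprocess.call([NMAP, '-sn', target], stdout=out)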

I take the multi-line native Nmap output and consolidate it down to single lines.  The derived fields use equals signs (=) for the labels and pipes (|) to separate the values.  I parse out the scan start date, the scanner IP address, the identified device IP address, the identified device MAC address, and the vendor associated with that MAC address.
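
Here is a hedged sketch of that consolidation. It assumes the normal human-readable output of an Nmap ping sweep ('Nmap scan report for ...' lines followed by an optional 'MAC Address: ...' line), so adjust the matching to whatever output your scan actually produces.

import re

def consolidate(nmap_output, scanner_ip, start_date):
    # Collapse the multi-line Nmap output into one labeled, pipe-delimited
    # line per identified device.
    lines = []
    device_ip = None
    for raw in nmap_output.splitlines():
        report = re.match(r'Nmap scan report for .*?([\d.]+)\)?\s*$', raw)
        if report:
            device_ip = report.group(1)
            continue
        mac = re.match(r'MAC Address: ([0-9A-F:]+) \((.+)\)', raw)
        if mac and device_ip:
            lines.append('start={0}|scanner={1}|ip={2}|mac={3}|vendor={4}'.format(
                start_date, scanner_ip, device_ip, mac.group(1), mac.group(2)))
            device_ip = None
    return lines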

I ship the export.txt file to Loggly (https://www.loggly.com) for parsing and alerting, as that allows me to focus on the analysis, not the administration.

John Lukach

Wednesday, August 3, 2016

GUIs are Hard - Python to the Rescue - Part 1

I consider myself an equal opportunity user of tools, but in the same respect I am also an equal opportunity critic of tools. There are both commercial and open source digital forensic and security tools that do a lot of things well, and a lot of things not so well. What makes for a good DFIR examiner is the ability to sort through the marketing fluff to learn what these tools can truly do, and to figure out which things they do very well.

One of the things that I find limiting in many of the tools is the Graphical User Interface (GUI). We deal with a huge amount of data, and we sometimes analyze it in ways we couldn’t have predicted ourselves. GUI tools make a lot of tasks easy, but they can also make some of the simplest tasks seem impossible.

My recommendation? Every tool should offer output in a few formats: CSV, JSON, SQLite. Give me the ability to go primal!

Tool of the Day

I have had a number of cases lately that have started as ‘malware’ cases. Evil traffic tripped an alarm, and that means there must be malware on the disk. It shouldn’t be surprising to you, as a DFIR examiner, that this is not always the case. Sometimes there is a bonehead user browsing stupid websites.

Internet Evidence Finder (IEF) is the best tool I have available right now to parse and carve the broad variety of internet activity artifacts from drive images. It does a pretty good job searching over the disk to find browser, web app, and website artifacts (though I don’t know exactly which versions are supported because of the documentation, but that is a digression for a different post).

Let me first cover some of IEF’s basic storage structure as I have worked it out. The artifacts are stored in SQLite format in the folder that you designate, in a file named ‘IEFv6.db’. Every artifact type that is found creates at least one table in the DB. Because each artifact has different properties, each of the tables has a different schema. The dev team at Magnet does seem to have settled on a few conventions that allow for some consistency. If a column holds URL data, then the column name has ‘URL’ in it. Similarly for dates, the column name will have ‘date’ in it.
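
If you want to see that layout for yourself, a short sqlite3 snippet like this will walk the schema and report which columns follow those naming conventions; the database path here is just an example.

import sqlite3

conn = sqlite3.connect(r'C:\cases\example\IEFv6.db')  # hypothetical case folder
cur = conn.cursor()

# One table per artifact type; list the columns that follow the 'URL' and
# 'date' naming conventions in each of them.
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
for (table,) in cur.fetchall():
    columns = [row[1] for row in conn.execute('PRAGMA table_info("{0}")'.format(table))]
    url_cols = [c for c in columns if 'url' in c.lower()]
    date_cols = [c for c in columns if 'date' in c.lower()]
    print('{0}: URL columns {1}, date columns {2}'.format(table, url_cols, date_cols))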

IEF provides a search function that allows you to do some basic string searching, or you can get a bit more advanced by providing a RegEx (GREP) pattern for the search. When you kick off this search, IEF creates a new SQLite db file named ‘Search.db’ and stores it in the same folder. You can only have one search at a time, since kicking off a new search will cause IEF to overwrite any previous db that was created. The search db, from what I can tell anyway, seems to have an identical schema structure to the main db, only filtered down in the number of records it holds based on the keywords or patterns that you provided.

There is another feature called filter, and I will admit that I have only recently found it. It allows you to apply various criteria to the dataset, with the major one being a date range. There are other things you can filter on, but I haven’t needed to explore those just yet. When you kick off this process, you end up with yet another SQLite database filled with a reduced number of records based on the criteria, and again it seems identical in schema to the main db. This one is named ‘filter.db’, which indicates that the dev team doesn't have much creativity. ;)

Problem of the Month

The major issue I have with the tool is in the way it presents data to me. The interface has a great way of digging into forensic artifacts, as they are categorized and divided by artifact type. You can dig into the nitty-gritty details of each browser artifact. For the cases that I have used IEF for lately, and I suspect in many of your Incident Response cases as well, I really don’t care *which* browser was the source of the traffic. I just need to know if that URL was browsed by bonehead so I can get him fired and move on. Too harsh? :)

IEF doesn’t give you a view where all of the URLs are consolidated. You have to click, click, click down through the many artifacts and look through tons of duplicate URLs. The problem behind this lies in the design of the artifact storage: multiple tables with different schemas in a relational database. A document-based database, such as MongoDB, would have allowed an easier search approach, but there are trade-offs that I don’t need to go off on a tangent about here. I will just say that there is no 100% clear winner.

To perform a search over multiple tables in a SQL based DB, you have to implement it in some kind of program code, because a single SQL query is almost impossible to construct. SQLite makes it even more difficult with its reduced list of native functions and its lack of support for stored procedures or functions defined in SQL. It just wasn’t meant for that. IEF handles this task for the search and filter process in C# code, and creates those new DB files as a sort of cache mechanism.

Solution of the Year

Alright, I am sensationalizing my own work a bit too much, but that is easy to do when something makes your work so much easier. That is the case with the Python script I am showing you here. It was born out of necessity and tweaked to meet my needs on different cases. It has saved me a lot of time, and I want to share it with you.

This Python script can take as input (-i) any of the three IEF database files that I mentioned above, since they share schema structures. The output (-o) is another SQLite database file (I know, just what you needed, another one) in the location of your choosing.

The search (-s) parameter allows you to provide a string to filter the records on, based upon that string being present in one of the URL fields of the record being transferred. I added this one because the search function of IEF doesn’t allow me to aim the keyword at a URL field. My keywords were hitting on several other metadata fields that I had no interest in.

The limit (-l) parameter was added because of a bug I found in IEF with some of the artifacts. I think it was mainly in the carved artifacts, so I really can’t fault IEF too much, but it was causing a size and time issue for me. The bug is that the URL field for a number of records was pushing over 3 million characters long. Let me remind you that each character in ASCII is a byte, so 3 million of those creates a URL that is 3 megabytes in size. Keep in mind that URLs are allowed to be Unicode, so go ahead and double that. I found that most browsers start choking if you give them a URL over 2,000 characters, so I decided to cut off the URL field at 4,000 by default to give just a little wiggle room. Magnet is aware of this and will hopefully solve the issue in an upcoming version.

This Python script will open the IEF DB file and work its way through each of the tables to look for any columns that have ‘URL’ in the name. If one is found, it will grab the type of the artifact and the value of the URL to create a new record in the new DB file. Some of the records in the IEF artifacts have multiple URL fields, and this will take each of them into the new file as a simple URL value. The source column records the name of the table (artifact type) and the name of the column that the value came from.
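
To make that concrete, here is a stripped-down sketch of that loop. It is not the actual ief-find-url.py, and it simplifies the output schema down to a single table, but it shows the idea behind the -i, -o, -s, and -l options described above.

import sqlite3

def export_urls(ief_db, out_db, search=None, limit=4000):
    # Copy every URL value out of an IEF database into a simple
    # (source, url) table, optionally filtering on a keyword (-s) and
    # truncating oversized values (-l).
    src = sqlite3.connect(ief_db)
    dst = sqlite3.connect(out_db)
    dst.execute('CREATE TABLE IF NOT EXISTS urls (source TEXT, url TEXT)')

    tables = [r[0] for r in src.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        columns = [r[1] for r in src.execute('PRAGMA table_info("{0}")'.format(table))]
        for column in (c for c in columns if 'url' in c.lower()):
            for (value,) in src.execute('SELECT "{0}" FROM "{1}"'.format(column, table)):
                if value is None:
                    continue
                value = value if isinstance(value, str) else str(value)
                if search and search.lower() not in value.lower():
                    continue
                dst.execute('INSERT INTO urls VALUES (?, ?)',
                            ('{0}.{1}'.format(table, column), value[:limit]))
    dst.commit()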

This post has gotten rather long, so this will be the end of part 1. In part 2, I will go through the new DB structure to explain the SQL views that are created and then walk through some of the code in Python to see how things are done.

In the meantime, you can download the ief-find-url.py Python script and take a look for yourself. You will have to supply your own IEF.

James Habben

Tuesday, June 7, 2016

Reporting: Benefits of Peer Reviews

Now that you are writing reports to get a personal and professional benefit, let’s look at some other ways that you can get benefits from these time suckers. You need the help of others on this one, since you will be giving your reports to them in seeking a review. You need this help from outside of your little bubble to ensure that you are pushing yourself adequately.

You need a minimum of 2 reviews on the reports you write. The first review is a peer review, and the other is a manager review. You can throw additional reviews on top of these if you have the time and resources available, and that is icing on the cake.

Your employer benefits from reviews for these reasons:
  • Reduced risk and liability
  • Improved quality and accuracy
  • Thorough documentation

There are more personal benefits here too:
  • Being held to a higher standard
  • Gauge on your writing improvement
  • You get noticed

Let me explain more about these benefits in the following sections.

Personal Benefits

Because the main intention of this post is to show the personal benefits and improvements, I will start here.

Higher Standards

The phrase ‘You are your own worst critic’ gets used a lot, and I do agree with it for the most part. For those of us with a desire to learn and improve, we have that internal drive to be perfect. We want to be able to bust out a certain task and nail it 110% all of the time. When we don't meet our high standards, we get disappointed in ourselves and note the flaws so we can do better next time.

Here is where I disagree with that statement just a bit. We can’t hold ourselves to a standard that we don't understand or even have knowledge about. If you don’t know proper grammar, it is very difficult for you to expect better. Similarly in DFIR, if you don’t know a technique to find or parse an artifact, you don’t know that you are missing out on it.

Having a peer examiner review your report is a great way of getting a second pair of eyes on the techniques you used and the processes you performed. They can review all of your steps and ask you questions to cover any potential gaps. In doing this, you then learn how the other examiners think and approach these scenarios, and can take pieces of that into your own thinking process.

Gauging Your Improvement

Your first few rounds of peer review will likely be rough with a lot of suggestions from your peers. Don’t get discouraged, even if the peer is not being positive or kind about the improvements. Accept the challenge, and keep copies of these reviews. As time goes on, you should find yourself with fewer corrections and suggestions. You now have a metric to gauge your improvement.

Getting Noticed

This is one of the top benefits, in my opinion. Being on a team with more experienced examiners can be intimidating and frustrating when you are trying to prove your worth. This is especially hard if you are socially awkward or shy since you won't have the personality to show off your skills.

Getting your reports reviewed by peers gives you the chance to covertly show off your skills. It’s not boasting. It’s not bragging. It’s asking for a check and suggestions on improvements. Your peers will review your cases and they will notice the effort and skill you apply, even if they don't overtly acknowledge it. This will build the respect between examiners on the team.

Having your boss as a required part of the review process ensures that they see all the work you put in. All those professional benefits I wrote about in my previous post on reporting go to /dev/null if your boss doesn't see your work output. If your boss doesn’t want to be a part of it, maybe it’s a sign that you should start shopping for a new boss.

Employer Benefits

You are part of a team, even if you are a solo examiner. You should have pride in your work, and pride in the work of your team. Being a part of the team means that you support other examiners in their personal goals, and you support the department and its business goals as well. Here are some reasons why your department will benefit as a whole from having a review process.

Reduced Risk and Liability

I want to hit the biggest one first. Business operations break down to assets and liabilities. Our biggest role in the eyes of our employers is to be an asset to reduce risk and liability. Employees in general introduce a lot of liability to a company and we do a lot to help in that area, but we also introduce some amount of risk ourselves in a different way.

We are trusted to be an unbiased authority when something has gone wrong, be it an internal HR issue or an attack on the infrastructure. Who are we really to be that authority? Have you personally examined every DLL in that Windows OS to know what is normal and what is bad? Not likely! We have tools (assets) that our employers invest in to reduce the risk of us missing that hidden malicious file. Have you browsed every website on the internet to determine which are malicious, inappropriate for work, a waste of time, or valid for business purposes? Again, not a chance. Our employers invest in proxy servers and filters (assets) from companies that specialize in exactly that to reduce the risk of us missing one of those URLs. Why shouldn’t your employer put a small investment in a process (asset) that brings another layer of protection against the risk of us potentially missing something because we haven't experienced that specific scenario before?

Improved Accuracy and Quality

This is a no-brainer really. It is embarrassing to show a report that is full of spelling, grammar, or factual errors. Your entire management chain will be judged when people outside of that chain read through your reports. The best conclusions and recommendations in the world can be thrown out like yesterday’s garbage if they are surrounded by easy-to-find errors. It happens, though, because of the amount of time it takes to write these reports. You can become blind to some of those errors, and a fresh set of eyes can spot things much quicker and easier. Having your report reviewed gives both you and your boss extra assurance against the risk of sending out errors.

Thorough Documentation

We have another one of those ‘reducing risk’ things on this one. Having your report reviewed doesn’t give you any extra documentation in itself, but it helps to ensure that the documentation given in the report is thorough.

You are typically writing the report for the investigation because you were leading it, or at least involved in some way. Because you were involved, you know the timeline of events and the various twists and turns that you inevitably had to take. It is easy to leave out what seems like pretty minor events in your own mind, because they don’t seem to make much difference in the story. With a report review, you will get someone else’s understanding of the timeline. Even better is someone who wasn’t involved in that case at all. They can identify any holes that were left by leaving out those minor events and help you to build a more comprehensive story. It can also help to identify unnecessary pieces of the timeline that only bring in complexity by giving too much detail.

Part of the Process

Report reviews need to be a standard part of your report writing process. They benefit both you and your employer in many ways. The only reason against having your reports reviewed is the extra time required by everyone involved in that process. The time is worth it, I promise you. Everyone will benefit and grow as a team.

If you have any additional thoughts on helping others sell the benefits of report reviews, feel free to leave them in the comments. Good luck!

James Habben

Tuesday, May 31, 2016

New Page: Python Libraries

I gave a talk at Enfuse 2016 on learning Python, and the responses I got after that talk were very appreciative and encouraging for me. I believe that everyone should invest in themselves to learn some programming. I was glad to hear that many others feel the same way, but I was not so glad to hear about the struggles that many of you are having in that learning process. With this post, I am hoping to give some help in that regard.

The number one piece of feedback I got from attendees after that session was appreciation for the mention of some of the libraries that I use, and the key benefits that I get from them. After hearing so much about that, I got to thinking about it myself and realized that I experienced the same thing when I was learning Python, and I still do as I learn new areas of it. There are a huge number of libraries available to accomplish many tasks, and they make writing our tools so much easier. The hard part is finding those libraries and deciding which of the many options is the best for our current purpose.

The goal of this new page is for John and me to put up the various libraries that we have used in the projects we create and give some background on why we chose them over others. The list will grow over time as we add to it, and if a certain entry warrants an extra bit of info, we will write up a blog post so we don't clutter that page up too badly. The format may change as it grows to make it easier to manage and read.

Hope you find this useful and good luck on your Python adventures!


James Habben

Wednesday, May 11, 2016

Autopsy Python Multi-User Modules

Autopsy allows examiners to collaborate on investigations using the multi-user case feature that shares database, message broker, search and storage resources. 

I wanted to write an Autopsy Module with Python to take advantage of the Multi-User Case collaboration benefits.

I also wanted to apply lessons learned from the 2015 Autopsy Module Development Contest to simplify external Python library imports and create a flexible user interface.

HashDump was built as a proof of concept; it requires that the Hash Lookup Ingest Module be run first to calculate the MD5 hashes.

HashDump.py builds the ingest module for the Autopsy user interface and passes the case file location as an argument to the HashDump.exe Python program.

HashDump.exe uses the case file (.AUT) that contains the information necessary for SQLite single-user database connections.  Multi-user PostgreSQL database connections also require information from the core.properties file in the examiner's roaming profile.
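
For the single-user side, the export essentially boils down to a plain SQLite query against the case database. Here is a rough sketch (leaving out the hash selection interface described next), assuming the case folder is derived from the .AUT path passed in, the single-user database is the standard autopsy.db file, and the hashes live in the md5 column of the tsk_files table; the multi-user variant would instead build a PostgreSQL connection from the core.properties values.

import os
import sqlite3
import sys

# HashDump.py passes the case file (.AUT) location as the first argument.
aut_file = sys.argv[1]
case_folder = os.path.dirname(aut_file)
case_db = os.path.join(case_folder, 'autopsy.db')  # single-user case database

conn = sqlite3.connect(case_db)
rows = conn.execute('SELECT md5, name FROM tsk_files WHERE md5 IS NOT NULL')

# Write the export to the base of the case folder for the report view.
with open(os.path.join(case_folder, 'HashDump.txt'), 'w') as report:
    for md5, name in rows:
        report.write('{0},{1}\n'.format(md5, name))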

The examiner is presented with a Python-generated user interface to select the hashes for export.

The Python user interface closes once the database export is completed.  HashDump.py then resumes control, adding the HashDump.txt file in the base of the case folder to the report view.

The code is up on GitHub for use, or better yet, write your own Autopsy Python Multi-User Module for the 2016 Autopsy Module Development Contest at OSDFCon.

Happy Coding!!
John Lukach

Thursday, May 5, 2016

Report Rapport

Let me just state this right at the top. You need to be writing reports. I don’t care what type of investigation you are doing or what the findings are. You need to be writing reports.

There are plenty of reasons that your management will tell you about why you have to write a report. There are even more reasons for you to write these reports, for your own benefit. Here is a quick list of a few that I thought of, and I will discuss a bit about each in sections below.

  • Documenting your findings
  • Justification of your time
  • CYA
  • Detail the thoroughness of your work
  • Show history of specific user or group
  • Justification for shiny tools
  • Measure personal growth

Documenting Your Findings

Your boss will share this recommendation because it’s a pretty solid one. You need to document what you have found. As DFIR investigators, security specialists, infosec analysts, etc., we are more technical in nature than the average computer user. We know the innermost workings of these computers, and oftentimes how to exploit them in ways they weren’t designed for. We dig through systems on an intimate level, and with this knowledge we can make the incorrect assumption that others understand even the most basic of these things.

Take the example of a Word document. A current generation Word document has an extension of ‘docx’ when saved to disk. So many things fly through my mind when I see those letters. I know, because of the ‘x’, that it is a current generation document. The current generation uses the PK ZIP file format. It contains metadata in the form of XML. It holds the document data, also in the form of XML. It can have attachments, and those are always placed in a specific directory. I know you can keep going too. How many of your executives know this?
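
If you ever want to show that structure rather than describe it, a few lines of Python against any handy .docx will list the XML parts and the directories where embedded images and attachments are stored; the file name here is just an example.

import zipfile

# A .docx is a PK ZIP archive; listing its contents shows the XML metadata
# and document parts, plus the folders that hold embedded attachments.
with zipfile.ZipFile('report.docx') as doc:
    for name in doc.namelist():
        print(name)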

The people making decisions to investigate incidents and pay your salary do not need to know these things, but they do need to understand them in the context of your investigation. Document the details like your job depends on it. Use pictures and screen shots if you have to, since that helps display data in a friendlier way to less technical people. Go to town with it and be proud of what you discovered. The next time you have a similar case, you will have this as a reference to help spur thoughts and ensure completeness.

Justification of your time

We are a bunch of professionals that get paid very well, and we work hard for it. How many times in the last month have you thought or said to yourself that you do not have enough time in the day to complete all the work that is being placed in your queue?

When you report on your work, you are providing documentation of your work. The pile of hard drives on your desk makes it seem to others that you can’t keep up. That could mean that they are asking too much of you, or it could mean that you aren’t capable enough. You don’t want to leave that kind of question in the minds of your management. Write the reports to show the time you are spending. Show them how much work is required for a ‘quick check into this ransomware email’ and that it isn’t actually just a quick check. If you do this right, you might just find yourself with a new partner to help ease that workload.


CYA

People like to place blame on others to make sure they themselves are in the clear. Your reports should document the facts as they are laid out, and let them speak for themselves. You should include information about when data was requested and when it was collected. Document the state of the data and what was needed to make it usable, if that was required. Track information about your security devices and how they detected or didn’t detect pieces of the threat. You should be serving as a neutral party in the investigation to find the answers, not to place the blame.

Detail the thoroughness of your work

So many investigations are opened with a broad objective: find the malware. Depending on the system and other security devices, it could be as easy as running an AV scan on the disk. Most times, in my experience at least, this is going to come up clean, since the malware didn’t get detected in the first place anyway.

You are an expert. Show it in your reports. Give those gritty details that you love to dig into, and not just the ones about what you found. The findings are important, but you should also document the things you did that resulted in no findings. You spend a lot of time on this work, and some people don’t understand what’s required beyond an AV scan.

Show history of specific user or group

If you are an investigator working for a company, you are guaranteed to find those users that always get infected. They are frustrating because they cause more work for you, and it is usually some little Potentially Unwanted Program (PUP) or ransomware. They are the type of person that falls for everything, and you have probably thought or said some things about them that don’t need to be repeated.

Document your investigations, and you will be able to show that Thurston Howell III has a pattern of clicking on things he shouldn’t. Don’t target these people, though. Document everything. As a proactive measure, you could start building a report summarizing your reports. Similar to the industry reports about attack trends, you can show internal trends and patterns that indicate things like a training program being needed to keep users from clicking on those dang links. This can also support justification to restrict permissions for higher risk people and groups, and now you have data to back up the assertion that they are high risk. There can be loads of data at your disposal, limited only by your imagination on how to use it effectively.

Justification for shiny tools

Have you asked for a new security tool and been turned down because it costs too much? What if you could provide facts showing that it is actually costing more to NOT have this tool?

Your reports provide documentation of facts and time. You can use these to easily put together a cost analysis. Do the math on the number of investigations related to this tool, and the hours involved in those investigations by everyone, not just you. You will have to put together a little extra to show how much time the new fanciness will save, but you will have done the hard part by already writing reports.

Measure personal growth

This one is completely about you. We all grow as people, and we change the way we write and think. We do this because of our experiences, and our understanding that we can evolve to be better. Do you write like you did in 1st grade? Hope not! How about 12th grade? Unless you are a freshman in college, you have probably improved from there also.

When you write reports, you give yourself the ability to measure your growth. This can be very motivating, but it takes personal drive. If you have any reports from even just 6 months ago, go back and read them. You might even ask yourself who actually wrote that report, and I don’t think that’s a bad thing!

Final report

Reports can be a rather tedious part of our job, but if you embrace the personal benefits it can really become a fun part. Take pride in your investigation and display that in your reports. It will show. It works similar to smiling when you talk on the phone. People can tell the difference.

If you are writing reports today, good for you! Push yourself further and make it fun.

If you are not writing reports today, DO IT!

I am starting a mini-series of posts on reporting. Future posts will be on structure and various sections of an investigative report. These are all my experiences and opinions, and I welcome your comments as well. Let’s all improve our reports together!

James Habben