Featured

printf("hello, world\n")

Welcome.  After reading other digital forensic blogs over the past couple of years I decided to start my own.  I have gained a lot by reading research done by others, so I thought it would only be right to give back to the digital forensic community.

I work in the government sector, so I will be limiting identifying information about me or my organization.  If you’re able to figure out who I am that’s fine.  Just know that I will actively try to not put anything identifying in here, although that may not be possible at times.  The posts will focus on the forensics.

A bit about me:  I manage a digital forensics lab in an ISO 17025 environment.  In addition to all of the managerial duties, I am also responsible for technical operations of my group and carry a full caseload.  If I am expected to lead, I also better do.  🙂

My vision for this blog will start small.  I hold a full-time job, have a family, and am in the middle of a master’s program.  I am aiming for one post a month, but there may be more or less depending on life.

Again, welcome, and thank you for taking the time to read the posts. I welcome feedback, whether it is positive or negative. If you see something that is inaccurate, please let me know so I can correct it. I do not want to proliferate bad information to the DF community.

Wickr. Alright. We’ll Call It A Draw.

WickrPrompt
Ugh.  Not again.

Portions of this blog post appeared in the 6th issue of the INTERPOL Digital 4n6 Pulse newsletter. 

I would like to thank Heather Mahalik and Or Begam, both of Cellebrite, who helped make the Android database portion of this blog post possible, and Mike Williamson of Magnet Forensics for all the help with the underpinnings of the iOS version.

I have been seeing the above pop-up window lately. A lot. Not to pick on any particular tool vendor, but seeing this window (or one similar to it) brings a small bit of misery. Its existence means there is a high probability there is data on a particular mobile device that I am not going to get access to, and this is usually after I have spent a considerable amount of time trying to gain access to the device itself. Frustrating.

One of my mentors from my time investigating homicides told me early in my career that I was not doing it right unless I had some unsolved homicides on the books; he felt it was some sort of badge of honor and showed a dedication to the craft. I think there should be a similar mantra for digital forensic examiners. If you conduct digital forensic examinations for any substantial amount of time, you are going to have examinations where there is inaccessible data, and nothing you do is going to change that. You can throw every tool known to civilization at it, try to manually examine it, phone a friend, look on Twitter, search the blog-o-sphere, search the Discord channel, query a listserv, and conduct your own research, and you still strike out. This is a reality in our discipline.

Not being able to access such data is no judgment of your abilities; it just means you may not win this round. Keep in mind there is a larger fight, and how you react to this setback is a reflection of your dedication to our craft. Do you let inaccessible data defeat you, give up, and quit, or do you carry on with that examination, get what you can, and apply that same tenacity to future examinations?

One needs the latter mindset when it comes to Wickr. For those who are unfamiliar, Wickr is a company that makes a privacy-focused, ephemeral messaging application. Initially available as an iOS-only app, Wickr expanded to include Android, Windows, macOS, and Linux, and branched out from personal messaging (Wickr Me) to small teams and businesses (Wickr Pro – similar to Slack), and an enterprise platform (Wickr Enterprise). Wickr first hit the app market in 2012, and has been quietly hanging around since then. Personally, I am surprised it is not as popular as Signal, but I think not having Edward Snowden’s endorsement and initially being secretive about its protocol may have hurt Wickr’s uptake a bit.

Regardless, this app can bring the pain to examinations.

Before I get started, a few things to note.  First, this post encompasses Android, iOS, macOS, and Windows.  Because of some time constraints I did not have time to test this in Linux.  Second, the devices and respective operating system versions/hardware are as follows:

Platform          Version    Device                         Wickr Version
Android           9.0        Pixel 3                        5.22
iOS               12.4       iPhone XS and iPad Pro 10.5    5.22
macOS             10.14.6    Mac Mini (2018)                5.28
Windows 10 Pro    1903       Surface Pro                    5.28

Third, Wickr Me contains the same encryption scheme and basic functionality as the Pro and Enterprise versions: encrypted messaging, encrypted file transfer, burn-on-read messages, audio calling, and secure “shredding” of data. The user interface of Wickr Me is similar to the other versions, so the following sections will discuss the functionality of Wickr while using the personal version.

Finally, how do you get the data?  Logical extractions, at a minimum, should grab desktop platform Wickr data during an extraction.  For Android devices, the data resides in the /data/data area of the file system, so if your tool can get to this area, you should be able to get Wickr data.  For iOS devices, you will need a jailbroken phone or an extraction tool such as one that is metal, gray, and can unlock a door to get the Wickr database.  I can confirm that neither a backup nor a logical extraction contains the iOS Wickr database.

Visual Walkaround

Wickr is available on Android, iOS, macOS, and Windows, and while these platforms are different, the Wickr user interface (UI) is relatively the same across these platforms. Figure 1 shows the Windows UI, Figure 2 shows the macOS UI, Figure 3 shows the iPhone, and Figure 4 shows the iPad. The security posture of the Wickr app on Android prevents screenshots from being taken on the device, so no UI figure is available.  Just know that it looks very similar to Figure 3.

Figure 1
Figure 1.  Wickr in Windows.
Figure 2.png
Figure 2.  Wickr in macOS.

 

Figure 3.png
Figure 3.  Wickr on iPhone.
Figure 4
Figure 4.  Wickr on iPad.

Each figure has certain features highlighted. In each figure the red box shows the icons for setting the expiration timer and burn-on-read (a setting that allows the sender of a message to set a self-destruct timer on a message before it is sent – the recipient has no control over this feature), the blue arrow shows the area where a user composes a message, the orange arrow shows the area where conversations are listed, and the purple arrow shows the contents of the highlighted conversation (chosen in the conversations list).  Not highlighted is the phone icon seen in the upper right corner of each figure; this initiates an audio call with the conversation participant(s).

The plus sign seen in the screen (red boxes) reveals a menu that has additional options: send a file (including media files), share a user’s location, or use one of the installed quick responses. Visually, the menu will look slightly different per platform, but the functionality is the same.  See Figure 5.

Figure 5
Figure 5.  Additional activity options (Windows UI).

The sending and receiving of messages and files works as other messaging applications with similar capabilities. Figure 6 shows an active conversation within the macOS UI.

Figure 6.png
Figure 6.  A conversation with a text message and picture attachments (macOS UI).

Wickr is similar to Snapchat in that messages “expire” after a set period of time. The default time a message is active is six (6) days, which is the maximum amount of time a message can be available, but a user can set message retention times as short as one second. This setting is device specific; if a user has multiple devices they can choose different retention periods for each device.

Users can also set “burn-on-read” times in which a message will expire (“burn”) after a certain period of time after the message has been read. This setting is controlled by the message sender, regardless of the recipient’s message retention period setting.  The retention period for burn-on-read messages can also be set anywhere between 1 second and 6 days. Figure 7 shows the Windows Wickr UI when a burn-on-read message has been received and opened (bottom of the active conversation window pane), and Figure 8 shows the same UI after the burn-on-read retention period expired.

Figure 7
Figure 7.  A burn-on-read message (timer in red).
Figure 8
Figure 8.  Poof! The message has been burned.

The Secure Shredder function is Wickr’s feature by which data that has been deleted by the app is rendered unrecoverable by overwriting the deleted data.  Secure Shredder is an automated feature that runs in the background but has a manual configuration feature if a user is part of the Wickr Pro Silver or Gold tiers, which allows users to manually initiate the function.  Testing showed this feature automatically runs every +/- one (1) minute while the device is idle.

Encryption.  All Of The Encryptions.

Wickr is designed with total privacy in mind, so all three versions use the same encryption model. The app not only protects messages, media, and files in transit, but it also protects data at rest. The app has been designed with perfect forward secrecy; if a user’s device is compromised, historical communications are still protected unless the attacker has the user’s password and the messages have not expired.

When a new message is received, it arrives in a “locked” state.  See Figure 9.

Figure 9.png
Figure 9.  A new message.

When a message is sent, the sender’s device will encrypt the message using a symmetric key. To generate the symmetric key, internal APIs gather random numbers which are then run through the AES-256 cryptographic algorithm in Galois/Counter Mode (GCM). Each message is encrypted using a new symmetric key, and this operation occurs strictly on the sender’s device. This encryption happens regardless of whether the message contains text, a file, or a combination of the two. The cipher text and the symmetric key (i.e. the package) are encrypted using the signed public key of the recipient’s device (I’ll discuss asymmetric operations in a minute), and then sent to the recipient who then decrypts the package using their private key. The symmetric key is then applied to the cipher text in order to decrypt it.
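To make the symmetric portion concrete, here is a minimal Python sketch of the general pattern described above: a fresh AES-256-GCM key for every message. This illustrates the technique only and is not Wickr’s code; the key generation, nonce handling, and function name are assumptions made for the example.

```python
# A minimal sketch of per-message AES-256-GCM encryption (NOT Wickr's actual code).
# Requires the third-party "cryptography" package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(plaintext: bytes):
    key = AESGCM.generate_key(bit_length=256)  # fresh symmetric key for every message
    nonce = os.urandom(12)                     # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext              # key + ciphertext travel as the "package"

key, nonce, ct = encrypt_message(b"hello, world")
print(AESGCM(key).decrypt(nonce, ct, None))    # b'hello, world'
```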

The takeaway here is that unlocking a received message = decrypting a received message.  A user may set their device to automatically unlock messages, but the default behavior is to leave them locked on receipt and manually initiate the unlock.

Asymmetric operations are applied to messages in transit.  As previously mentioned, the cipher text and the symmetric key used to encrypt it are packaged up and encrypted using the public key of the intended recipient’s device.  The public key is signed with components from said device.  The recipient device uses the corresponding private key to decrypt the package, and then the symmetric key is used to decrypt the cipher text (unlocking the message) so the recipient can read it. If a message is intended for multiple recipients or for a recipient who has multiple devices, a different set of keys is used for each destination device.

Here is where the pain starts to come. The keys used in the asymmetric operations are ephemeral; a different set of public/private key pairs are used each time a message is exchanged between devices. Wickr states in its technical paper that pools of components (not the actual keys themselves) of private-public pairs are created and refreshed by a user’s device while they are connected to Wickr’s servers.  If a device is disconnected from the Wickr servers, it will use what key pairs it has, and will then refresh its pool once it has re-established the connection.

Even if a private key is compromised, the only message that can be decrypted is the one that corresponds to that specific private/public key pair; the rest of the messages are still safe since they use different pairs.

But wait, it gets worse.  Just to turn the knife a bit more, Wickr has a different encryption scheme for on-device storage that is separate from message transfers.  When Wickr is first installed on a device a Node Storage Root Key (Knsr) is generated. The Knsr is then applied to certain device data (described as “device specific data and/or identifiers derived from installed hardware or operating system resources that are unique, constant across application installs but not necessary secret“) to generate the Local Device Storage Key (Klds). The Klds is used to encrypt Wickr data stored locally on the device, including files necessary for Wickr to operate.

The Klds is itself encrypted using a key derived from the user’s password being passed through scrypt. When a user successfully logs in to the Wickr app, the Klds is decrypted and placed into the device’s memory, allowing for successful exposure of the locally stored Wickr data through the app UI. When the app is terminated, placed in an inactive state, or a user logs out, the Klds is removed from memory, and the Wickr data is no longer available.
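For illustration, the password-to-key step looks roughly like the sketch below. The scrypt work factors, the salt handling, and the variable names are all assumptions made for the example; Wickr’s actual parameters are not published in the material referenced here.

```python
# A minimal sketch of the password-to-key step described above, using scrypt.
# Work factors and salt handling here are assumptions for illustration only.
import os
from hashlib import scrypt

password = b"hunter2"                 # hypothetical user password
salt = os.urandom(16)                 # placeholder salt
kek = scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
# 'kek' would then wrap/unwrap the Local Device Storage Key (Klds) at rest.
print(kek.hex())
```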

For those who have legal processes at their disposal (court orders, search warrants, & subpoenas), the news is equally dire.  Wickr does keep undelivered messages on their servers for up to six (6) days, but, as I previously mentioned, those messages (which are in transit) are encrypted.  Wickr states they do not have access to any keys that would decrypt the messages that are stored.  There is some generic account and device information, but no message content.  For more information on what little they do have, please read their legal process guide.

So, Is There Anything I Can Actually Get?

The answer to this question is the standard digital forensics answer:  “It depends.” The encryption scheme combined with the on-device security measures makes it extremely difficult to recover any useful data from either the app or Wickr, but there is some data that can be retrieved, the value of which depends on the goal of the examination.

Testing has shown a manual examination is the only way, as of the time of this post, to recover message content from iOS, macOS, and Windows (files not included). This requires unfettered access to the device along with the user’s Wickr password. Due to a change in its encryption scheme (when this happened is unknown), Wickr is not supported by any tool I tested on any platform, which included the current versions of Cellebrite, Axiom, and XRY. This included the Android virtualization options offered by two mobile vendors.  Along those same lines, I also tried Alexis Brignoni’s virtualization walkthrough using Nox Player, Virtual Box, and Genymotion, with no luck on all three platforms.

Things can be slightly different for those of you who have Wickr deployed in an enterprise environment.  The enterprise flavor of Wickr does have compliance (think FOIA requests and statutory/regulatory requirements) and eDiscovery features, which means message content may be retained so long as the feature is enabled (I did not have access to this version, so I could not ascertain whether this was the case).  Just be aware that if the environment includes Wickr, this may be an option for you.

The type and amount of data an examiner can possibly get is dependent upon which platform is being examined. The nice thing is there is some consistency, so this can help examiners determine, rather quickly, if there is anything to be recovered.  The consistency can be broken up into two categories:  iOS and Android/macOS/Windows. One thing is consistent across ALL platforms, though: an examiner should not expect to find any message content beyond six (6) days from the time of examination.

Android/macOS/Windows

The most important thing to remember for the Android, macOS, and Windows platforms is that order matters when doing a manual examination. That is, the order in which you examine the device for Wickr content is important. Failure to keep this in mind may result in recoverable data being deleted unnecessarily.  Android can be slightly different, which I will discuss shortly.

I will go ahead and get one thing out of the way for all three platforms: the databases containing account information, conversation information, contacts, and message content are all encrypted.  The database is protected with SQLCipher 3, and the Wickr user password is not the password to the database (I tried the correct Wickr password in DB Browser for SQLite, and none of the databases would open – you’ll see why below). Figure 10 shows the macOS database in hexadecimal view and Figure 11 shows the Windows database.  While not shown here, just know the Android database looks the same.

Figure 10.png
Figure 10.  Wickr database in macOS
Figure 11.PNG
Figure 11.  Wickr database in Windows.
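Incidentally, if you want to confirm for yourself that a given wickr_db is encrypted rather than simply unparsed, a quick header check works, since a plaintext SQLite file always begins with a fixed 16-byte magic string. The commented-out portion below sketches how a passphrase attempt would look with a SQLCipher-enabled driver such as pysqlcipher3; treat the PRAGMA settings as assumptions to verify, and remember the Wickr account password will not open the database.

```python
# Quick triage: a plaintext SQLite database always begins with the 16-byte magic
# "SQLite format 3\x00"; a SQLCipher-protected database looks like random data.
def looks_encrypted(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(16) != b"SQLite format 3\x00"

print(looks_encrypted("wickr_db.sqlite"))

# Attempting a passphrase requires a SQLCipher-enabled driver (e.g., pysqlcipher3).
# As noted above, the Wickr account password will NOT open this database.
# from pysqlcipher3 import dbapi2 as sqlcipher
# conn = sqlcipher.connect("wickr_db.sqlite")
# conn.execute("PRAGMA key = 'candidate-passphrase';")
# conn.execute("PRAGMA cipher_compatibility = 3;")     # SQLCipher 3-era settings
# conn.execute("SELECT count(*) FROM sqlite_master;")  # raises if the key is wrong
```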

You may have noticed the file name for both macOS and Windows is the same:  wickr_db.sqlite.  The similarities do not stop there.  Figure 12 shows the location of the database in macOS, Figure 13 shows the database’s location in Windows.

Figure 12
Figure 12.  Home in macOS.  /Users/%UserName%/Library/Application Support/Wickr, LLC/WickrMe/
Figure 13
Figure 13.  Home in Windows.  C:\Users\%UserName%\AppData\Local\Wickr, LLC\WickrMe\

As you can see, most of the file names in each home directory are the same.  Note that the last folder in the path, “WickrMe,” may be different depending on what version is installed on the device (Wickr Me, Wickr Pro, Enterprise), so just know the last hop in the path may not be exactly the same.

Interesting note about the “preferences” file in Windows:  it is not encrypted.  It can be opened, and doing so reveals quite a bit of octet data.  The field “auid” caught my attention, and while I have a theory about its value, I’ll save it for another blog post.

For Android, the directory and layout should look familiar to those who examine Android devices.  The database file, wickr_db, sits in the databases folder.  See Figure 14.

Figure 14
Figure 14.  Home in Android.  /data/data/com.mywickr.wickr2

If you will recall, when a user unlocks a message they are actually decrypting it.  This also applies to files that are sent through Wickr.  Unlike messages, which are stored within the database, files, both encrypted and decrypted, reside in the Wickr portion of the file system.  When a message with a file is unlocked, an encrypted version of the file is created within the Wickr portion of the file system.  When the file is opened (not just unlocked), it is decrypted, and a decrypted version of that file is created within a different path within the Wickr portion of the file system.  Figure 15 shows the Android files, Figure 16 shows the macOS files, and Figure 17 shows the Windows files.  The top portion of each figure shows the files in encrypted format and the bottom portion of the figure shows the files in decrypted format.

Figure 15.png
Figure 15.  Encrypted/decrypted files in Android.
Figure 16
Figure 16.  Encrypted/decrypted files in macOS.
Figure 17
Figure 17.  Encrypted/decrypted files in Windows.

When a file is sent through Wickr it is given a GUID, and that GUID is consistent across devices for both the sender and the recipient(s).  In the figures above, Android represents Test Account 1 and macOS/Windows represents Test Account 2, so you will notice that the same GUIDs are seen on both accounts (all three platforms).

The fact an encrypted version of a file exists indicates a device received the file and the message was unlocked, but it doesn’t necessarily indicate the file was opened.  It isn’t until the user chooses to open the file within the Wickr UI that a decrypted version is deposited onto the device as seen above.  An example of the open dialogue is seen in Figure 18.  The triple dots in the upper right-hand corner of the message bubble invoke the menu.

Figure 18.png
Figure 18.  Open dialogue example (Windows UI).

If a picture is received and the message is unlocked, a thumbnail is rendered within the Wickr UI message screen, as seen in Figure 18, but this does not deposit a decrypted version of that picture; the user must open the file.  Any other file type, including videos, merely displays the original file name in the Wickr UI.  A user will have to open the file in order to view its contents.

The directories for files on each platform are as follows (paths start in the Wickr home directory):

Platform     Encrypted Files         Decrypted Files
Android      ~/files/enc             ~/cache/dec
macOS        ~/temp/attachments      ~/temp/preview
Windows      ~/temp/attachments      ~/temp/preview

This behavior applies to files both sent and received.  Also keep in mind that you may find encrypted files with no corresponding decrypted version.  This may be because the message retention time expired, which is why the order of examination is important, or it may mean the user never opened the file.
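Since the encrypted and decrypted copies of an attachment share the file’s GUID (as noted above), a quick way to see which attachments were actually opened is to diff the two folders. This is a hedged sketch: the paths come from the table above (Windows shown), and the Wickr home location is an assumed default install.

```python
# Hedged sketch: compare the encrypted and decrypted attachment folders by file name
# (the names are GUIDs, per the figures). Adjust the paths for Android or macOS.
from pathlib import Path

wickr_home = Path.home() / "AppData" / "Local" / "Wickr, LLC" / "WickrMe"  # assumed install location
enc_guids = {p.stem for p in (wickr_home / "temp" / "attachments").iterdir() if p.is_file()}
dec_guids = {p.stem for p in (wickr_home / "temp" / "preview").iterdir() if p.is_file()}

print("Opened (decrypted copy present):", sorted(enc_guids & dec_guids))
print("No decrypted copy (never opened, or already expired):", sorted(enc_guids - dec_guids))
```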

For both macOS and Windows, the only way to recover message content is via a manual examination using the Wickr UI, which means that a logical image should contain the sought after data.  However, the order of your examination can impact your ability to recover any decrypted files that may be present on the device.  Since the Wickr application is keeping track of what files may have passed their message retention period, it is extremely important to check for decrypted files prior to initiating Wickr on the device for a manual examination.  Failure to do so will result in any decrypted file whose message retention time has expired being deleted.

The Android database.  Slightly different

While the databases for macOS and Windows are inaccessible, the story is better for Android.  While conducting research for this post I discovered Cellebrite Physical Analyzer was not able to decrypt the wickr_db database even though it was prompting for a password.  Cellebrite confirmed Wickr had, in fact, changed their encryption scheme and Physical Analyzer was not able to decrypt the data.  A short time later they had a solution, which allowed me to proceed with this part of the post.  While not currently available to the public, this solution will be rolled out in a future version of Physical Analyzer.  Fortunately, I was granted pre-release access to this feature.

Again, thank you Heather and Or.  🙂

While there is still a good deal of data within the wickr_db file that is obfuscated, the important parts are available to the examiner, once decrypted.  The first table of interest is “Wickr_Message.”  See Figure 19.

Figure 19.png
Figure 19.  Wickr_Message table.

The blue box is the timestamp for sent and received messages (Unix epoch) and the orange box contains the text of the message or the original file name that was either sent or received by the device.  The timestamp in the red box is the time the message will be deleted from the Wickr UI and database.  The values in the purple box are interesting.  Based on testing, each file I sent or received had a value of 6000 in the messageType column.  While not that important here, these values are important when discussing iOS.
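Once a tool hands you a decrypted copy of the Android database, pulling the basics back out is a simple query. The sketch below assumes an exported plain-SQLite copy (the file name is hypothetical); cachedText and messageType are the column names visible in Figure 19, and 6000 is the file-transfer value observed during testing.

```python
# Hedged sketch: query a decrypted/exported copy of the Android wickr_db.
import sqlite3

conn = sqlite3.connect("wickr_db_decrypted.sqlite")  # hypothetical exported copy
rows = conn.execute("SELECT messageType, cachedText FROM Wickr_Message ORDER BY rowid;")
for message_type, cached_text in rows:
    label = "FILE" if message_type == 6000 else "TEXT"  # 6000 observed for file transfers
    print(f"[{label}] {cached_text}")
```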

The blobs in the messagePayload column are interesting in that they contain a lot of information about file transfers between devices.  See Figure 20.

Figure 20.png
Figure 20.  Message payload.

The file sender can be seen next to the red arrow, the file type (e.g., picture, document, etc.) is next to the blue arrow, and the GUID assigned to the file is seen in the green box.  The GUID values can be matched up to the GUIDs of the files found in the /enc and /dec folders.  Here, the GUID in the green box in Figure 20 can be seen in both folders in Figure 21.  Finally, you can see the original name of the file next to the purple arrow (iOS_screenshot).  The original file name also appears in the cachedText column in Figure 19.

Figure 21
Figure 21.  Matching GUIDs

The orange box in Figure 20 contains the recipient’s username along with the hash value of the recipient’s Wickr user ID.  That value can be matched up to the value in the senderUserIDHash column in the same table (see the red box in Figure 22).  The title of this column is deceptive, because it isn’t actually the user ID that is represented.

Figure 21
Figure 22.  Sender’s hashed ID

Figure 23 shows the same hash in the serverIDHash column of the Wickr_ConvoUser table.

Figure 23
Figure 23.  Same IDs.

Also of note in this table and the one seen in Figure 22 is the column vGroupID.  Based on testing, it appears every conversation is considered to be a “group,” even if that group only has two people.  For example, in my testing I only had my two test accounts that were conversing with each other.  This is considered a “group,” and is assigned a GUID (seen in the blue box).  The good thing about this value is that it is consistent across devices and platforms, which could come in handy when trying to track down conversation participants or deleted conversations (by recovering it from another device).  An example of this cross-platform-ing is seen in Figure 24, which shows the table ZSECEX_CONVO from the Wickr database in iOS.  Note the same GroupID.

Figure 24
Figure 24.  Same group ID.

Figure 25 shows, again, the serverIDHash, but this time in the table Wickr_User.  It is associated with the value userIDHash.  The value userAliasHash (the same table) is seen in Figure 26.

Figure 25
Figure 25.  serverIDHash and the userIDHash.
Figure 26
Figure 26.  userIDHash (Part 2).

Figure 27 shows some telemetry for the users listed in this table.

Figure 27
Figure 27.  User telemetry.

The columns isHidden (purple box) and lastMessaged (red box) are self-explanatory.  The value of 1 in the isHidden column means the user does not appear in the Conversations section of the UI.  That value coupled with the value of 0 in the lastMessaged column indicates this row in the table probably belongs to the logged in account.

The lastRefreshTime column (blue box) has the same value in both cells.  The timestamp in the cell for row 1 is when I opened the Wickr app, which, undoubtedly, caused the app to pull down information from the server about my two accounts.  Whether this is what this value actually represents requires more testing.  The same goes for the values in the lastActivityTime column (orange box).  The value seen in the cell in row 1 is, based on my notes, the last time I pushed the app to the background.  The interesting thing here is that there was activity within the app after that timestamp (the following day around lunchtime PDT).  More testing is required in order to determine what these values actually represent.  For now, I would not trust lastActivityTime at face value.

The table Wickr_Settings contains data of its namesake.  The first column of interest is appConfiguration (red box).  See Figure 28.

Figure 28.PNG
Figure 28.  Wickr_Settings.

The data in this cell is in JSON format.  Figure 29 shows the first part of the contents.

Figure 29
Figure 29.  JSON, Part 1.

There are two notable values here.  The first, in the blue box, is self-explanatory:  locationEnabled (Wickr can use location services).  I let Wickr have access to location services during initial setup, so this value is set to ‘true.’  The value in the red box, alwaysReauthenticate, refers to the setting that determines whether or not a user has to log in to Wickr each time the app is accessed.  It corresponds to the switch in the Wickr settings seen in Figure 30 (red box).

Figure 30.PNG
Figure 30.  Login each time?  No thanks.

Because I didn’t want to be bothered with logging in each time, I opted to just have Wickr save my password and login automatically each time, thus this value is set to ‘false.’  If a user has this set and does not provide the Wickr password, a manual examination will be impossible.
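Because appConfiguration is plain JSON once the row is readable, checking these two switches in a script is trivial. The sample value in the sketch below is illustrative only; the key names come straight from Figure 29.

```python
# Hedged sketch: read the two switches out of the appConfiguration JSON.
import json

app_config = '{"locationEnabled": true, "alwaysReauthenticate": false}'  # illustrative value only
settings = json.loads(app_config)
print("Location services allowed:", settings.get("locationEnabled"))
print("Password required at every launch:", settings.get("alwaysReauthenticate"))
```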

The rest of the contents of the JSON data are unremarkable, as seen in Figure 31.

Figure 31
Figure 31.  JSON, Part 2.  Nothing much.

There are three additional columns that are notable in this table.  The first is the setting for the Secure Shredder, autoShredderEnabled.  This value is set to 1, which means it is enabled.  I would not expect to see any other value in this cell, as Secure Shredder runs automatically in Wickr Me and some tiers of Wickr Pro; there is no way to disable it unless the Silver, Gold, or Enterprise version of Wickr is present.  See Figure 32.

Figure 32
Figure 32.  Anonymous Notifications, Secure Shredder, and Auto Unlock.

The second notable column is unlockMessagesEnabled (red box).  As its name implies, this setting dictates whether a message is unlocked on receipt, or if a user has to manually initiate the unlock.  I took the default setting, which is not to unlock a received message (database value of 0).  Figure 33 shows the setting in the Wickr Settings UI.

Figure 33
Figure 33.  Message Auto Unlock switch.

Figure 32 also shows anonymousNotificationsEnabled (orange box).  This setting dictates whether Wickr notifications provide any specific information about a received message/file (e.g., sender’s user name, text of a message, file name), or if the notification is generic (e.g., “You have a new message”).  Again, the default is to show generic notifications (database value of 1).  Figure 34 shows the setting in the Wickr Settings UI.  Note the switch is off, but since I have Auto Unlocks disabled, this switch is not used because my messages are not automatically unlocked on receipt.

Figure 34.PNG
Figure 34.  Anonymous Notification setting.

I want to address one last table: Wickr_Convo.  Using the conversation GUIDs, you can determine the last activity within each conversation that is stored on the device.  In Figure 35, this conversation GUID is the same as the ones seen in Figures 23 and 25.

Figure 35
Figure 35.  Conversations listed by GUID.

There are two values that are notable.  The first is the lastOutgoingMessageTimestamp (red box). That is a pretty self-explanatory label, right?  Not quite, and examiners should be careful interpreting this value.  That same timestamp appears in the Wickr_Message table seen in Figure 37, but with a different label.

Figure 36
Figure 36.  Wickr_Convo table timestamps.
Figure 37.png
Figure 37.  Wickr_Message table timestamps.

It appears that the lastOutgoingMessageTimestamp from Wickr_Convo applies to the last message that did not involve a file transfer (the corresponding timestamp value is seen in the Wickr_Message table).  The value lastUpdatedTimestamp (blue box in Figure 36) actually represents the last communication (message or file transfer) in the conversation, which is seen in the blue-boxed timestamp in the Wickr_Message table (Figure 37).

The value messageReadTimestamp (orange box in Figure 36) represents the time the last message was unlocked.  Notice that the value is just about the same as that seen in lastUpdatedTimestamp, but with more granularity.

A couple more things

There are two more files I’d like to touch on with regards to the Android version.  The first is com.google.android.gms.measurement.prefs.xml found in the /shared_prefs folder.  See Figure 38.

Figure 38
Figure 38.  Measurement data for the app.

This file keeps track of certain data about app usage.  The most obvious data points are the install time for the app itself (orange box) and the first time the app was opened (red box).  The next two data points are app_backgrounded (yellow box) and last_pause_time (green box).  The app_backgrounded value, as you can see, is a boolean value that indicates whether the app is active on the device screen or running in the background (i.e., not front-and-center on the device).  The value last_pause_time is the last time the app was pushed to the background by the user (“paused”).  If an examiner is pulling this data from a seized device, it is highly likely that the app_backgrounded value will be true, unless the device is seized and imaged while Wickr is actively being used.

The value in the blue box, last_upload, is a deceiving value, and I have yet to figure out what exactly it represents.  I have a theory that it may be the last time the app uploaded information about its current public key, which is used in the asymmetric encryption operations during message transport, but I cannot be totally sure at this point.  Just know that last_upload may not necessarily represent the last time a file was uploaded.
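Since this is a standard Android shared preference file, the values can be dumped and the 13-digit millisecond timestamps converted with a few lines. This is a hedged sketch: key names beyond the ones called out above should be verified against your own extraction.

```python
# Hedged sketch: dump the shared preference XML and convert millisecond timestamps.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

root = ET.parse("com.google.android.gms.measurement.prefs.xml").getroot()
for el in root:
    name, value = el.get("name"), el.get("value", el.text)
    if el.tag == "long" and value and value.isdigit() and len(value) == 13:
        # 13 digits = millisecond Unix epoch; convert for readability
        value = datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc).isoformat()
    print(f"{el.tag:8} {name} = {value}")
```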

The last file is COUNTLY_STORE.xml.  Based on research, it appears this file may be used for analytical purposes in conjunction with the Countly platform.  This file keeps some metrics about the app, including the cell service carrier, platform version (SDK version), hardware information, and a unique identifier, which, on Android, is the advertising ID (adid).  The data appears to be broken up into transactions, with each transaction containing some or all of the data points I just mentioned. Each transaction appears to be separated by triple colons.  Each also contains a timestamp.

A representative example can be seen in Figure 39; it does not contain all of the data points I mentioned, but it gives you a good idea as to what to expect.

Figure 40
Figure 39.  COUNTLY_STORE.xml in Android.

This file is inconsistent.  On some of my extractions the file was empty after app use, and on others it was full of data.  Sometimes the timestamps coincided with my being in the app, and other times they did not.  There does not seem to be enough consistency to definitively say the timestamps seen in this file are of any use to examiners.  If someone has found otherwise, please let me know.

There is an iOS equivalent:  Countly.dat.  This file contains most of the same data points I already described, and while it has a .dat extension, it is a binary plist file.  In lieu of the adid (from Android), a deviceID is present in the form of a GUID.  I think this deviceID serves more than one purpose, but that is speculative on my part.
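A hedged sketch for the iOS side: despite the .dat extension the file is a binary plist, so Python’s plistlib will open it. Which key holds the deviceID GUID is not documented here, so the sketch simply dumps the contents for review.

```python
# Hedged sketch: read the Countly .dat file discussed above as a binary plist.
import plistlib

with open("Countly.dat", "rb") as f:
    print(plistlib.load(f))
```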

Speaking of iOS…

iOS is different. Of course it is.

The iOS version of Wickr behaves a little differently, probably due to how data is naturally stored on iOS devices.  The data is already encrypted, and is hard to access.  The two biggest differences, from a forensic standpoint, are the lack of decrypted versions of opened files, and the database is not encrypted.

Before I proceed any further, though, I do want to say thank you again to Mike Williamson for his help in understanding how the iOS app operates under the hood.  🙂

I searched high and low in my iOS extractions, and never found decrypted versions of files on my device.  So there are two possible explanations:  1, they are in a place I didn’t look (highly unlikely but not completely impossible), or 2, they are never created in the first place.  I’m leaning towards the latter.  Regardless, there are no decrypted files to discuss.

Which leaves just the database itself.  While it is not encrypted, a majority of the data written to the table cells is encrypted.  I will say I am aware of at least two mobile device forensic vendors, who shall not be named at this time, that will probably have support for Wickr on iOS in the near future.  In the meantime, though, we are left with little data to review.

The first table is ZWICKR_MESSAGE, and, as you can guess, it contains much of the same data as the Wickr_Message table in Android.  Remember when I mentioned the messageType value in Android?  In iOS that value is ZFULLTYPE.  See Figure 40.

Figure 38
Figure 40.  Type 6000.

The value of 6000 is seen here and, as will be seen shortly, corresponds to files that have been sent/received.  Also, note the Z_PK values 8 and 10, respectively, because they will be seen in another table.

Figure 41 shows some additional columns, the titles of which are self-explanatory.  One I do want to highlight, though, is the ZISVISIBLE column.  The two values in red boxes represent messages I deleted while within the Wickr UI.  There is a recall function in Wickr, but I was not able to test this out to see if this would also place a value of 0 in this column.

Figure 39.png
Figure 41.  Deleted message indicators.

Figure 42 shows another set of columns in the same table.  The columns ZCONVO and Z4_CONVO actually come from a different table within the database, ZSECEX_CONVO.  See Figures 42 and 43.

Figure 40.png
Figure 42.  Conversation and Calls.
Figure 41
Figure 43.  ZSECEX_CONVO table.

In Figure 42 the two columns highlighted in the orange box, ZLASTCALLCONVO and Z4_LASTCALLCONVO, appear to keep track of calls made via Wickr; in my case these are audio calls.  Here, the value indicates the last call to take place, and which conversation it occurred in.  This is interesting since the Android database did not appear to keep track of calls as far as I could tell (the data may have been encrypted).  Remember, this table is equivalent to the Wickr_ConvoUser table in the Android database, so you will be able to see the ZVGROUPID shortly.

The next bit of the table involves identifying the message sender (ZUSERSENDER), the timestamp of the message (ZTIMESTAMP), the time the message will expire (ZCLEANUPTIME), and the message identifier (ZMESSAGEID).  The timestamps in this table are stored in Core Foundation Absolute Time (CFAbsolute).  See Figure 44.

Figure 42.png
Figure 44.  Messages and their times.
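Converting CFAbsolute values is just a fixed offset, since the epoch is 2001-01-01 00:00:00 UTC. A quick sketch (the sample value is arbitrary, not taken from the figures):

```python
# CFAbsolute time counts seconds from 2001-01-01 00:00:00 UTC, a fixed
# 978307200-second offset from the Unix epoch.
from datetime import datetime, timezone

CF_EPOCH_OFFSET = 978307200  # seconds between 1970-01-01 and 2001-01-01 (UTC)

def cfabsolute_to_datetime(value: float) -> datetime:
    return datetime.fromtimestamp(value + CF_EPOCH_OFFSET, tz=timezone.utc)

print(cfabsolute_to_datetime(585000000.0))  # arbitrary example value
```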

The values in the ZUSERSENDER column can be matched back to the Z_PK column in the ZSECEX_USER table.

That’s it for this table!  The rest of the contents, including the ZBODY column, are encrypted.

The ZSECEX_CONVO table has some notable data, as seen in Figure 45.  The one column I do want to highlight is ZLASTTIMESTAMP, which is the time of the last activity (regardless of what it was) in the conversation (the “group”).  Interestingly, the times here are stored in Unix epoch.

Figure 43.png
Figure 45.  Last time of activity in a conversation (group).

Figure 46 shows some additional data.  The last conversation in which a call was either placed or received is seen in the column ZLASTCALLMSG (orange box – the timestamp can be obtained from the ZWICKR_MESSAGE table), along with the last user to send or receive anything within the conversation (ZLASTUSER – red box). The value in the ZLASTCALLMSG column can be matched back to the values in the Z_PK column in the ZWICKR_MESSAGE table.  The value in the ZLASTUSER column can be matched back to the Z_PK column in the ZSECEX_USER table. And, finally, as I previously showed in Figure 24, the ZVGROUPID (blue box).

Figure 44
Figure 46.  The last of the ZSECX_CONVO table.

The table ZSECEX_USER, as seen in Figures 47 and 48, contains data not only about the account owner, but also about users with whom the account holder may be conversing.  The table contains some of the same information as the Wickr_User table in Android.  In fact, Figure 47 looks very similar to Figure 27.  The values represent the same things as well.

Figure 47
Figure 47.  Hidden status and last activity time.

Figure 48 shows the same items as seen in Figure 26, but, as you can see, the hash values are different, which makes tracking conversation participants using this information impossible.

Figure 48
Figure 48.  Same participants, different hashes.

File transfers in iOS are a bit tricky because some of the data is obfuscated, and in order to figure out which file is which an examiner needs to examine three tables:  Z_11MSG, ZWICKR_MESSAGE, and ZWICKR_FILE.  Figure 49 shows the Z_11MSG table.

Figure 49
Figure 49.  Z_11MSG.

The column Z_13MSG refers to the ZWICKR_MESSAGE table, with the values 8 and 10 referring to values in the Z_PK column in that table.  See Figure 50.

Figure 50
Figure 50.  Transferred files.

Obviously, associated timestamps are found in the same row further into the table.  See Figure 51.

Figure 51
Figure 51.  Timestamps for the transferred files.

The column Z_11FILES in Figure 49 refers to the ZWICKR_FILE table.  See Figure 52.

Figure 52
Figure 52.  Files with their GUIDs.

The values in the Z_11FILES column in Figure 49 refer to the Z_PK values seen in Figure 52.  Figure 53 shows the files within the file system.  As I previously mentioned, there are no decrypted versions of these files.

Figure 53
Figure 53.  The file GUIDs from the database table.
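Put together, the three-table walk can be expressed as a single join once the database is open in a viewer or script. A hedged sketch follows; the database file name is hypothetical, and column names should be verified against your own copy.

```python
# Hedged sketch: Z_11MSG joins ZWICKR_MESSAGE (Z_13MSG -> Z_PK) to ZWICKR_FILE
# (Z_11FILES -> Z_PK), tying each transferred file to its message and timestamp.
import sqlite3

conn = sqlite3.connect("wickr_ios.sqlite")  # hypothetical name for the iOS database
query = """
SELECT m.Z_PK AS message_pk, m.ZTIMESTAMP AS cfabsolute_timestamp, f.*
FROM Z_11MSG j
JOIN ZWICKR_MESSAGE m ON m.Z_PK = j.Z_13MSG
JOIN ZWICKR_FILE f ON f.Z_PK = j.Z_11FILES;
"""
for row in conn.execute(query):
    print(row)
```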

Figure 54 shows the ZANONYMOUSNOTIFICATION and ZAUTOUNLOCKMESSAGES values from the ZSECEX_ACCOUNT table (the Android values were seen in Figure 32).  Both values here are zero, meaning I had these features turned off.

Figure 54
Figure 54.  Anonymous Notification and Auto Unlock settings in iOS.

The last table I want to highlight is the ZSECX_APP table.  See Figure 55.

Figure 55
Figure 55.  Users and their associated app installations.

The values in the ZUSER column relate back to the values seen in the Z_PK column in the ZWICKR_USER table.  Each different value in the ZAPPIDHASH column represents a different app install on a device.  For example, Test Account 1 appeared on four different devices (iPhone, iPad, Windows, macOS).  This means four different devices, each with their own individual installation of Wickr, which translates to a different ZAPPIDHASH value for each individual device.  Knowing a user has multiple devices could be beneficial.  Warning:  be careful, because this isn’t the only way to interpret this data.

As part of the testing, I wanted to see if this value could change on a device, and, as it turns out, it can.  Test Account 2 was only logged in on the Pixel 3.  I installed the app, used it, pulled the data, wiped the Pixel and flashed it with a new install of Android, and then reinstalled Wickr.  I repeated those steps one more time, which means Wickr was installed on the same device three different times, and, as you can see, there are three different hash values for ZUSER 2 (Test Account 2).

The moral of this story is that while this value can possibly represent different devices where a user may be logged in, it actually represents instances of app installation, so be careful in your interpretation.

Conclusion

Wickr is a tough one.  This app presents all sorts of forensic challenges.  At the moment there is very little data that is recoverable, but some insights about communication and app usage can be gleaned from what little data is available.  Sometimes, files can be recovered, and that may be all an examiner/investigator needs.

The good news is, though, there is help on the horizon.

Google Assistant Butt Dials (aka Accidental & Canceled Invocations)

Last week I was at DFRWS USA in Portland, OR to soak up some DFIR research, participate in some workshops, and congregate with some of the DFIR tribe. I also happened to be there to give a 20-minute presentation on Android Auto & Google Assistant.

Seeing how this was my first presentation I was super nervous and I am absolutely sure it showed (I got zero sleep the night before). I also made the rookie mistake of making WAY more slides than I had time for; I do not possess that super power that allows some in our discipline to zip through PowerPoint slides at superhuman speeds. The very last slide in the deck had my contact information on it, which included the URL for this blog. Unbeknownst to me, several people visited the blog shortly after my presentation and read some of the stuff here. Thank you!

As it turns out, this happened to generate a conversation. On one of the breaks someone came up to me and posed a question about Google Assistant. That question led to other conversations about Assistant, and another question was asked: what happens when a user cancels whatever action they wanted Google Assistant to do when they first invoked it?

I had brought my trusty Pixel 3 test phone with me on this trip for another project I am working on, so I was able to test this question fairly quickly with a pleasantly surprising set of results. The Pixel was running Android Pie with a patch level of February 2019 that had been freshly installed a mere two hours earlier. The phone was not rooted, but did have TWRP (3.3.0) installed, which allowed me to pull the data once I had run my tests.

The Question

Consider this scenario: a user not in the car calls on Google Assistant to send a text message to a recipient. Assistant acknowledges, and asks the user to provide the message they want to send. The user dictates the message, and then decides, for whatever reason, that they do not want to send it. Assistant reads the message back to the user and asks what the user wants to do (send it or not send it). The user indicates they want to cancel the action, and the text message is never sent.

This is the scenario I tested. In order to invoke Google Assistant I used the Assistant button on the right side of the Google Quick Search bar on the Android home screen. My dialogue with Google Assistant went as follows:

Me: OK, Google. Send a message to Josh Hickman

GA: Message to Josh Hickman using SMS. Sure. What’s the message?

Me: This is the test message for Google Assistant, period (to represent punctuation).

GA: I got “This is a test message for Google Assistant.” Do you want to send it or change it?

Me: Cancel.

GA: OK, no problem.

If you have read my blog post on Google Assistant when outside of the car you know where the Google Assistant protobuf files are located, and the information they contain, so I will skip ahead to examining the file that represented this session.

The file header that reports where the protobuf file comes from is the same as before; the “opa” is seen in the red box. However, there is a huge difference with regards to the embedded audio data in this file. See Figure 1.

Figure 1
Figure 1.  Same header, different audio.

In the blue box there is a marker for Ogg, a container format that is used to encapsulate audio and video files. In the orange box is a marker for Opus, which is a lossy audio compression codec. It is designed for interactive speech and music transmission over the Internet and delivers high-quality audio at low bitrates, which makes it well suited for sending Assistant audio across limited-bandwidth connections.  Based on this experiment and data in the Oreo image I released a couple of months ago, I believe Google Assistant may be using Opus now instead of the LAME codec.  The takeaway here is to just be aware you may see either.

In the green box is the string “Google Speech using libopus.” Libopus is the reference library used to encode Opus audio. Since this was clearly audio data, I treated it just like the embedded MP3 data I had previously seen in other Google Assistant protobuf files. I carved from the Ogg marker all the way down until I reached a series of 0xFF values just before a BNDL (see the previous Google Assistant posts about BNDL). I saved the file out with no extension and opened it with VLC Player. The following audio came out of my speakers:

“OK, no problem.”

This is the exact behavior I had seen before in Google Assistant protobuf files: the file contained the audio of the last thing Google Assistant said to me.
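For anyone who would rather script the carve described above than do it by hand in a hex editor, a rough sketch follows. The input file name is hypothetical, and stopping at the first long run of 0xFF bytes is a heuristic based on the behavior described in this and the previous posts.

```python
# Rough carving sketch for the embedded Ogg/Opus audio.
import re

with open("assistant_session_protobuf.bin", "rb") as f:  # hypothetical file name
    data = f.read()

start = data.find(b"OggS")                    # Ogg page marker
assert start != -1, "no Ogg marker found"
stop = re.search(rb"\xff{4,}", data[start:])  # run of 0xFF values just before the BNDL
audio = data[start:start + stop.start()] if stop else data[start:]

with open("carved_audio.ogg", "wb") as out:   # plays in VLC if the carve is clean
    out.write(audio)
```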

However, in this instance my request (to send a message) had not been passed to a different service (the Android Messages app) because I had indicated to Assistant that I did not want to send the message (my “Cancel” command). I continued searching the file to see if the rest of my interaction with Google Assistant was present.

Figure 2 shows an area a short way past the embedded audio data. The area in the blue box should be familiar to those who read my previous Google Assistant posts. The hexadecimal string 0xBAF1C8F803 appears just before the first vocal input (red box) that appears in this protobuf file. The 8-byte string seen in the orange box, while not exactly what I had seen before, had bytes that were the same (the leading 0x010C and trailing 0x040200). Either way, if you see this, get ready to see the text of some of the user’s vocal input.

Figure 2
Figure 2.  What is last is first.

So far, this pattern was exactly as I had seen before: what was last during my session with Google Assistant was first in the protobuf file. So I skipped a bit of data because I knew the session data that followed dealt with the last part of the session. If the pattern holds, that portion of the session will appear again toward the end of the protobuf file.

I navigated to the portion seen in Figure 3. Here I find a 16-byte string which I consider to be a footer for what I call vocal transactions. It marks the end of the data for my “Cancel” command; you can see the string in the blue box. Also in Figure 3 is the 8-byte string that I saw earlier (that acts as a marker for the vocal input) and the text of the vocal input that started the session (“Send a message to Josh Hickman”).

Figure 3
Figure 3.  The end of a transaction and the beginning of another.

Traveling a bit further finds two things of interest. The first is data that indicates how the session was started (via pressing the button in the Google Quick Search Box – I see this throughout the files in which I invoked Assistant via the button), which is highlighted in the blue box in Figure 4. Figure 4 also has a timestamp in it (red box). The timestamp is a Unix epoch timestamp that is stored little endian (0x0000016BFD619312). When decoded, the timestamp is 07/16/2019 at 17:42:38 PDT (-7:00), which can be seen in Figure 5. This is when I started the session.

Figure 4
Figure 4.  A timestamp and the session start mechanism.
Figure 5
Figure 5.  The decoded timestamp.
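Decoding these timestamps outside of a dedicated tool is straightforward: the eight bytes are a little-endian count of milliseconds since the Unix epoch. A quick sketch using the value above:

```python
# The eight bytes are a little-endian millisecond Unix epoch timestamp.
import struct
from datetime import datetime, timezone

raw = bytes.fromhex("129361fd6b010000")  # on-disk byte order for 0x0000016BFD619312
millis = struct.unpack("<Q", raw)[0]
print(datetime.fromtimestamp(millis / 1000, tz=timezone.utc))  # shift to PDT (-7:00) to compare with Figure 5
```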

The next thing I find, just below the timestamp, is a transactional GUID. I believe this GUID is used by Google Assistant to keep vocal input paired with the feedback that the input generates; this helps keep a user’s interaction with Google Assistant conversational. See the red box in Figure 6.

Figure 6
Figure 6.  Transactional GUID.

The data in the red box in Figure 7 is interesting and I didn’t realize its significance until I was preparing slides for my presentation at DFRWS. The string 3298i2511e4458bd4fba3 is the Lookup Key associated with the (lone) contact on my test phone, “Josh Hickman;” this key appears in a few places. In the Contacts database (/data/data/com.android.providers.contacts/databases/contacts2.db) the key appears in the contacts, view_contacts, view_data, and view_entities tables. It also appears in the participants table in the Bugle database (/data/data/com.google.android.apps.messaging/databases/bugle.db), which is the database for the Android Messages app. See Figures 7, 8, & 9.

Figure 7
Figure 7.  The lookup key in the protobuf file.
Figure 8.PNG
Figure 8.  The participants table entry in the bugle.db.
Figure 9
Figure 9.  A second look at the lookup key in the bugle.db.

There are a few things seen in Figure 10. First is the transactional GUID that was previously seen in Figure 6 (blue box). Just below that is the vocal transaction footer (green box), the 8-byte string that marks vocal input (orange box), and the message I dictated to Google Assistant (red box). See Figure 10.

Figure 10.png
Figure 10.  There is a lot going on here.

Figure 11 shows the timestamp in the red box. The string, read little endian, decodes to 07/17/2019 at 17:42:43 PDT, 5 seconds past the first timestamp, which makes sense, since I would have dictated the message after having made the request to Google Assistant. The decoded time is seen in Figure 12.

Figure 11
Figure 11.  Timestamp for the dictated message.
Figure 12.png
Figure 12.  The decoded timestamp.

Below that is the transactional GUID (again, previously seen in Figure 6) associated with the original vocal input in the session. Again, I believe this allows Google Assistant to know that this dictated message is associated with the original request (“Send a message to Josh Hickman”). This allows Assistant to be conversational with the user. See the red box in Figure 13.

Figure 13.png
Figure 13.  The same transactional GUID.

Scrolling through quite a bit of protobuf data finds the area seen in Figure 14. Here I found the vocal transaction footer (blue box), the 8-byte vocal input marker (orange box) and the vocal input “Cancel” in the red box.

Figure 14.png
Figure 14.  The last vocal input of the session.

Figure 15 shows the timestamp of the “Cancel;” it decodes to 07/17/2019 at 17:42:57 PDT (-7:00). See Figure 16 for the decoded timestamp.

Figure 15.png
Figure 15.  The “Cancel” timestamp.
Figure 16
Figure 16.  The decoded “Cancel” timestamp.

The final part of this file shows the original transactional GUID again (red box), which associates the “Cancel” with the original request. See Figure 17.

Figure 17
Figure 17.  The original transactional GUID…again.

After I looked at this file, I checked my messages on my phone and the message did not appear in the Android Messages app. Just to confirm, I pulled my bugle.db and the message was nowhere to be found. So, based on this, it is safe to say that if I change my mind after having dictated a message to Google Assistant the message will not show up in the database that holds messages. This isn’t surprising as Google Assistant never handed me off to Android Messages in order to transmit the message.

However, and this is the surprising part, the message DOES exist on the device in the protobuf file holding the Google Assistant session data. Granted, I had to go in and manually find the message and the associated timestamp, but it is there. The upside to the manual parsing is there is already some documentation on this file structure to help navigate to the relevant data. 🙂

I also completed this scenario by invoking Google Assistant verbally, and the results were the same. The message was still resident inside of the protobuf file even though it had not been saved to bugle.db.

Hitting the Cancel Button

Next, I tried the same scenario but instead of telling Google Assistant to cancel, I just hit the “Cancel” button in the Google Assistant interface. Some users may be in a hurry to cancel a message and may not want to wait for Assistant to give them an option to cancel, or they are interrupted and may need to cancel the message before sending it.

I ran this test in the Salt Lake City, UT airport, so the time zone was Mountain Daylight Time (MDT or -6:00). The conversation with Google Assistant went as so:

Me: Send a text message to Josh Hickman.

GA: Message to Josh Hickman using SMS. Sure. What’s the message?

Me: This is a test message that I will use to cancel prior to issuing the cancel command.

*I pressed the cancel button in the Google Assistant UI*

Since I’ve already covered the file structure and markers, I will skip those things and get to the relevant data. I will just say the structure and markers are all present.

Figure 18 shows the 8-byte marker indicating the text of the vocal input is coming (orange box) along with the text of the input itself (red box). The timestamp seen in Figure 19 is the correct timestamp based on my notes: 07/18/2019 at 9:25:37 MDT (-6:00).

Figure 18.png
Figure 18.  The request.
Figure 19.png
Figure 19.  The timestamp.
Figure 20
Figure 20.  The timestamp decoded.

Just as before, the dictated text message request was further up in the file, which makes sense here because the last input I gave Assistant was the dictated text message. Also note that there are variants of the dictated message, each with their own designation (T, X, V, W, & Z). This is probably due to the fact that I was in a noisy airport terminal and, at the time I dictated the message, there was an announcement going over the public address system. See Figure 21 for the message and its variants, Figure 22 for the timestamp, and Figure 23 for the decoded timestamp.

Figure 21
Figure 21.  The dictated message with variants.
Figure 22.png
Figure 22.  The timestamp.
Figure 23.png
Figure 23.  The decoded timestamp.

As I mentioned, I hit the “Cancel” button on the screen as soon as the message was dictated. I watched the message appear in the Google Assistant UI, but I did not give Assistant time to read the message back to me to make sure it had dictated the message correctly. I allowed no feedback whatsoever. Considering this, the nugget I found in Figure 24 was quite the surprise.

Figure 24
Figure 24.  The canceled message.

In the blue box you can see the message in a java wrapper, but the thing in the red box…well, see for yourself. I canceled the message by pressing the “Cancel” button, and there is a string “Canceled” just below the message. I tried this scenario again by just hitting the “Home” button (instead of the “Cancel” button in the Assistant UI), and I got the same result. The dictated message was present in the protobuf file, but this time the message did not appear in a java wrapper; the “Canceled” ASCII string was just below an empty wrapper. See Figure 25.

Figure 25
Figure 25.  Canceled.  Again.

So it would appear that an examiner may get some indication a session was canceled before Google Assistant had a chance to either send the message or receive a “Cancel” command. Obviously, there are multiple scenarios in which a user could cancel a session with Google Assistant, but having “Canceled” in the protobuf data is definitely a good indicator. The drawback, though, is there is no indication of how the transaction was canceled (e.g. by way of the “Cancel” button or hitting the home button).

An Actual Virtual Assistant Butt Dial

The next scenario I tested involved me simulating what I believe to be Google Assistant’s version of a butt-dial. What would happen if Google Assistant was accidentally invoked? By accidentally I mean by hitting the button in the Quick Search Box by accident, or by saying the hot word without intending to call on Google Assistant. Would Assistant record what the user said? Would it try to take any action even though there were probably no actionable items, or would it freeze and do nothing? Would there be any record of what the user said, or would Assistant realize what was going on, shut itself off, and not generate any protobuf data?

There were two tests here, with the difference being in the way I invoked Assistant. One was by button and the other by hot word. Since the results were the same I will show just one set of screenshots, which are from the scenario in which I pressed the Google Assistant button in the Quick Search Box (right side). I was in my hotel room at DFRWS, so the time zone is Pacific Daylight Time (-7:00) again. The scenario went as such:

*I pressed the button*

Me: Hi, my name is Josh Hickman and I’m here this week at the Digital Forensic Research Workshop. I was here originally…

*Google Assistant interrupts*

GA: You’d like me to call you ‘Josh Hickman and I’m here this week at the digital forensic research Workshop.’ Is that right?

*I ignore the feedback from Google Assistant and continue.*

Me: Anyway, I was here to give a presentation and the presentation went fairly well considering the fact that it was my first time here…

*Google Assistant interrupts again*

GA: I found these results.

*Google Assistant presents some search results for addressing anxiety over public speaking…relevant, hilarious, and slightly creepy.*

As before, I will skip file structure and get straight to the point.

The vocal input is in this file. Figure 26 shows the vocal input and a variant of what I said (“I’m” versus “I am”) in the purple boxes. It also shows the 5-byte marker for the first vocal input in a protobuf file (blue box) along with the 8-byte marker that indicates vocal input is forthcoming (orange box).

Figure 26.png
Figure 26.  The usual suspects.

Just below the area in Figure 26 is the timestamp of the session. The time decodes to 07/17/2019 at 11:51:06 PDT (-7:00). See Figure 27.

Figure 27.png
Figure 27.  Timestamp.
Figure 28
Figure 28.  Decoded timestamp.

Figure 29 shows my vocal input wrapped in the java wrapper.

Figure 29.png
Figure 29.  My initial vocal input, wrapped.

Interestingly enough, I did not find any data in this file related to the second bit of input Google Assistant received, the fact that Google Assistant performed a search, or what search terms it used (or thought I gave it). I even looked at the other protobuf files in the app_session folder to see if a new file had been generated. Nothing.

Conclusion

This exercise shows there is yet one more place to check for messages in Android.  Traditionally, we have always thought to look for messages in database files.  What if the user composed a message using Google Assistant?  If the user actually sends the message, the traditional way of thinking still applies.  But, what if the user changes their mind prior to actually sending those dictated messages?  Are those messages saved to a draft folder or some other temporary location in Messages?  No, they are not.  In fact, they are not stored in any other location that I can find other than the Google Assistant protobuf files (if someone can find them please let me know).  The good news is if a message is dictated using Assistant and the user cancels the message, it is possible to recover the message that was dictated but never sent.  This could give further insight into the intent of a user and help recover even more messages.  It also gives a better picture of how a user actually interacted with their device.

The Google Assistant protobuf files continue to surprise me with how much data they contain.  At this year’s I/O conference Google announced speed improvements to Assistant along with their intention to push more of the natural language processing and machine learning functions onto the devices instead of having everything done server-side.  This could be advantageous in that more artifacts could be left behind by Assistant, which would give a more holistic view of device usage.

Me(n)tal Health in DFIR – It’s Kind of a Big Deal

When I initially started this blog I set a modest goal of making one post a month with the understanding that sometimes life will happen and take priority. Well, life is happening for me this month: an imminent house move, an upcoming presentation at DFRWS USA, the GCFE, and several cases at work have kept me extremely busy. With all that going on there has been absolutely zero time for any research. Being the stubborn person I am, though, I couldn’t NOT post something, so here we are. Fortunately, there are no screenshots this month. 🙂

A few days ago I was cruising around the DFIR Discord channel when someone asked an important question: how are examiners/investigators who are exposed to child sexual exploitation material (i.e. child pornography) given mental health support, if any?  The few replies that came in were all over the place. Some responses indicated they received zero support, others got what I would consider partial support, and one responder indicated they got a lot of support.

Why?

I have the unfortunate experience of being exposed to this material at my current job assignment, and have been for several years now due to past job assignments. No one wants to see it, be around it, or be around individuals who willingly seek out this material. This material doesn’t magically appear out of thin air; it has to be created, which means a child has to be sexually exploited. This is against the law. Period.

Viewing these acts is…terrible.

In addition to the social implications, there is a societal need to investigate people who possess, distribute, and create this material. These investigations are mentally taxing because the material is tough to look at, plain and simple. But, the investigations have to be done. There is no way around it. The well-being of a child is at stake.

The subject matter of these investigations requires a special kind of person to do them. I cannot tell you how many times I have had seasoned investigators say to me “I don’t know how you do it. I would jump across the table and kill them.” The thing is, I believe they would do just that. Investigators/examiners are human, and, just like everyone else, we are all wired differently. Certain things may trigger a severe emotional response in one investigator/examiner, and not trigger a severe emotional response in another. Investigators/examiners who do these types of investigations/examinations have to have a particular mindset. Having done all kinds of criminal investigations and examinations for various criminal offenses, I can tell you, for example, that there is a difference in mindset between dealing with a homicide suspect and an individual who peddles in this material.

Investigators/examiners who are exposed to this material have to keep severe emotional responses in check in order to remain professional and do their job, and it takes a lot of mettle to do this. That mental effort, along with being repeatedly exposed to this material, takes a toll on the mind and the heart. I have seen colleagues crumble under the mental and emotional stress caused by these investigations/examinations, and walk away from investigations/digital forensics. I even had a co-worker take their own life.

And the need for mental fortitude doesn’t just apply to law enforcement investigators/examiners. The private sector has its own set of stressors that take a mental and physical toll on DFIR personnel who operate in that arena. Long hours, being away from family/friends, conflicting priorities, deadlines, and employer/peer expectations can all introduce stress and cause the mind to buckle and suffer.

And, if you think the non-law enforcement DFIR people don’t see some disturbing material, you are wrong. Digital devices act as a sort of safe for the mind (in addition to being the bicycle Steve Jobs liked to talk about), so people will store valuable things in them. Sometimes these valuable things may have a (negative) social stigma associated with them, and the owner wants to keep them secret, afraid that someone will find out. DFIR practitioners who operate in the private/non-law enforcement sector will find this stuff, and while it may not be unlawful to possess the material, it may still be disturbing, so viewing it takes a toll.

I will add that this discussion also applies to those who conduct forensic audio/video examinations. Our team does those exams, too. We have the unfortunate experience, at times, of watching/listening to a person die or be seriously injured or maimed. Audio/video examinations are some of the toughest we do because we actually see/hear the event.

It Doesn’t Have To Be This Way

There have been a few DFIR blog posts published in the past few months that have addressed burnout/mental health in our discipline, so I am not going to re-hash what they have said. They are good articles, and DFIR folks should read them. If you are interested, they are:

Christa Miller (Forensic Focus) – Burnout in DFIR (And Beyond)

Brett Shavers – Only Race Cars Should Burnout

Thom Langford – Drowning, Not Waving

If you are struggling, seek help. Just know that you are not the only one, and there are resources out there to help you, including others in the DFIR community; generally speaking, we are a supportive bunch. Even if your employer doesn’t offer support, the DFIR community will.

One of the responses I saw in the Discord channel indicated that there is a negative connotation around seeking out help for mental health. I understand that because I have worked in environments where expressing mental/emotional distress was seen as a sign of weakness among peers and supervisors. However, I was fortunate enough to find my way into an environment where mental health is taken seriously and, when people are in distress (expressed or not), peers and supervisors listen and take action to help. The few responses I saw made me think environments like mine are the exception and not the rule. I hope I am wrong.

The thing is, it doesn’t have to be that way.

What To Do?

I am not a health professional, so I don’t know the answer to the question or if there even IS an answer.

However, I do know mental health is important, in both DFIR and non-DFIR careers. Even for those of us DFIR’ers who are not exposed to child sexual exploitation material on a regular basis, the other major stressors I previously mentioned can have a negative impact on mental health (see Thom’s article above). Our minds are subjected to so much that it makes sense to have someone check on them from time to time.

To use Brett Shavers’ car analogy, it would be silly not to take your car in for a maintenance checkup after an extended period of use. Why would you not give your mind the same checkup by someone who is licensed to do so? We do that for our physical bodies (most of us do, anyway), so why not for the mind? Our minds and bodies are symbiotic, just like the systems in a car; a change in one can affect the other, good or bad. Just as a breakdown in one of a car’s systems can degrade the others and hurt overall performance, a breakdown in your mind can affect your physical health, job performance, personal habits, and interpersonal relationships.

I have been in supportive environments, and am now responsible for not only maintaining that type of environment, but looking after team members’ well-being. Their families have entrusted my organization with their well-being, and my organization has delegated that responsibility to me. Those of you who supervise a DFIR team have the same responsibility, whether you realize it or not. Sure, one more thing to be responsible for, but guess what. You are in THE seat, and this is extremely important.

For those of you who are not supervisors, you should be looking out for your colleagues, and that includes your supervisor. I have tried to establish a relationship with my fellow team members that encourages free flowing communication, regardless of whether it is positive or negative, and I have experienced both. I would like to think they would come to me if they noticed a change in my behavior.

Again, I am not a health professional, and I am not sure there is a one-size-fits-all answer to how an organization effectively deals with mental health issues in DFIR. That being said, I thought I would share what my organization does to try to keep a healthy environment for its DF examiners (we have no incident response function). What we do may work for other organizations, it may not, but I do want to show that it can be done.

An Example

The first thing, and I think this is probably the most important, is that we have agency buy-in. If we did not have support from our administration, the rest of what we do would not happen. They fully support what we do and they recognize that happy employees are not only productive employees, but employees who are more likely to stay than to leave. What does that support entail? Well, they provide the funding and approve policies. Without those two things, it would have been impossible to do anything. Again, this applies to my organization, which happens to be 400-ish strong (only three of us are DF). If your agency is small and not very bureaucratic, you may have an easier time with this.

Policies. Some may roll their eyes at them, despise them, or completely ignore them. Regardless of your feelings toward them, they work for the purposes here. Our policy requires….requires…that our examiners go see a licensed psychologist at least once a year, and the organization pays for the visit. (Update: this is separate from the employee assistance program, or EAP). Having this in the policy puts the agency on the hook, so to speak, and my organization is completely ok with that. Again, they fully support the mission and the employees who carry out that mission. By making the visit mandatory in a policy, it insulates the program (somewhat) from the budget shortfalls we encounter from time to time.

If a DF employee requests to go to see a licensed psychologist after/before their annual visit because they feel they are struggling, we send them, and the organization pays for it, no questions asked. Any examination (regardless of what it is for) can suddenly hit an examiner the wrong way at the wrong time and have a detrimental effect on their mental health. We realize that, thus we do not tell the employee “Can’t this wait until your scheduled visit?” No, we send them as quickly as we can get an appointment. Again, this is separate from EAP.

Along those same lines, we also realize that an examination may not have a contemporaneous emotional effect, and that it can take a while for the emotional distress to manifest itself to the point the examiner realizes there is a problem, or others notice a change. Again, this is why we do not lock them in to a set schedule.

There is a second part of this. Sometimes we carry our work home with us. If we are struggling at work, we can carry that home, and that can start to wear on the family members/significant others who live with us. Our policy allows the spouse of a DF examiner to go see a licensed psychologist, too. They may need help helping the examiner cope, or they may need to offload what the examiner offloads on them. Just like the examiner, the spouse can go multiple times if needed, and the agency pays for it.

Meet Our Lady

img_0176

Who in DFIR doesn’t like dog pictures? Well, this isn’t just any random dog. Meet Lady. She is the therapy K-9 that is attached to our team. Lady is considered a working K-9, just like a K-9 who detects narcotics or explosives, so the usual rules apply to her (e.g. no people food). She is considered an employee; she has an identification badge, a uniform, and an entry in the employee directory.

Just like other working dogs, Lady lives with her handler, who is a member of our DF team. She is a part of our family, and we treat her as such.

Lady came to us by way of the Paws and Stripes program at the Brevard County, Florida Sheriff’s Office. I will not get into the specifics of that program, but just know she came to us after having undergone four months of training at the program site. We have a separate policy that addresses Lady. It addresses things such as her medical care, food, lodging, grooming, appearance, the person who is responsible for Lady (her handler), and certification requirements. Just as an example, my organization pays for all food and medical care as long as she is able to serve in her official capacity. In the event she is not able to serve, she retires from service. The Director of my organization has the final say-so about with whom she retires, but, in keeping with standards, she would probably retire with her handler. Once that occurs, the handler absorbs the cost of food, but my organization will continue to pay for medical care for Lady until her death. We believe Lady is around 2 years old (she was rescued from a shelter), so we plan on her being with us for a LONG time.

Lady is a certified therapy K-9, and is certified through the Alliance of Therapy Dogs. You can read more about that organization and the certification requirements here.

In my opinion, this K-9 program is money well-spent. The mental health benefits Lady provides are incalculable, not only to the DF examiners but to the organization as a whole. For the DF examiners she can be a pleasant distraction; whether it’s to take her out to potty, or to just toss a ball or frisbee, she can provide a short, necessary, and welcome distraction from tough examinations. Lady is intuitive, too. She can sense if someone is having a hard time, and will happily apply a wet nose to a leg or hand to get your attention, which gets you out from behind your workstation and takes your mind off your exam.

The budget for Lady is modest compared to other costs in my organization. We budget around $1600 (USD) per year, but we have yet to come close to tapping that whole pot of money. If we were to lose an examiner due to mental health issues, we would have to spend time recruiting and hiring (my hourly salary plus that of the others involved) and training (DFIR training is not cheap) a replacement. From a financial perspective, Lady is “spend a little money up front, save a lot of money later.” By investing in Lady, we invest in the mental health of our examiners.

Here’s a picture of Lady hard at work….or not. I promise she has beds scattered all throughout our work areas (along with toys).

And here is a picture of her when she visited a medical facility over the holidays (periodic therapy visits outside of work are a requirement of her certification).

And the last one (I feel like a parent). One of our team members rides his motorcycle into work when the weather is nice. Lady randomly hopped up there one afternoon (she wasn’t allowed to ride on the bike).

From a supervisory standpoint there are a couple of things that I do to help with mental health. A small thing is rotating examination types. In other words, if an examiner has had a tough examination, I will assign a not-so-tough subject matter examination after that (“not-so-tough,” of course, is subjective). For example, if an examiner had a child sexual exploitation examination, I try to assign something other than a child sexual exploitation case to that examiner for their next exam or two. Sometimes our case queue will not allow for this, but I monitor what exam types they are working and do what I can from that angle.

Another small thing that I do is leave my door open as much as I can, i.e. I have an open door policy. Usually every morning the team stops by the office, coffee in hand, and has a seat. We discuss current examinations and any issues that have arisen during those examinations. A lot of times we are trading ideas on ways to overcome those issues. We also discuss other ancillary subjects and non-work related matters, too. I appreciate that communication and exchange of ideas. I typically learn something from those discussions, too. I will note that this is not a required meeting…it just happens, and it may happen again, spontaneously, throughout the workday.

While I am invested in and appreciative of our daily discussions, these discussions also serve another purpose: I get a chance to observe the team. Is there any change in their mood or behavior that I can detect? Have they said anything that gives me cause for concern? Are they passively expressing some type of emotional distress? Does any change I detect coincide with a current or recent examination they have conducted? I am looking and listening for these things. As I mentioned before, their families have lent them to the citizens of our state via our organization to deal with some of the toughest subject matters in the criminal justice system. I would be remiss if I didn’t take their well-being to heart.

We try to go out for a team dinner, off-site, after hours every so often. The team usually leaves a little early and heads to the location, and I stay behind for a bit and meet them. We’ll discuss a few work-related matters and then we officially go off the clock. Work is done, and so is our discussion of it. I will say that schedules have been all over the place as of late so we are a bit off schedule.  This happens.  

Encouraging team members to not feel bad when taking time off from work is something I have noticed that I have to do every so often. I usually have to do this when something unexpected arises and causes a team member to request leave on short notice. Life happens…to all of us…at some point during our career. Whether you work in DFIR or not, things will happen outside of your work that will require you to divert your focus and energy from your work to that thing, whatever it is. Diverting like that requires time away from work, and that’s ok. That’s what paid time off (PTO) is for.

Conclusion

I hope readers find this helpful.  Mental health in our field is an important subject, and it is one that I don’t think gets talked about enough.  If you have any questions about our program or anything else, please feel free to reach out; I am responsive to communication through the site. 

Mental health is something that impacts all of us in DFIR.  It is important that we recognize that and take steps to foster environments in which mental health is taken seriously and not dismissed.

Take care of yourselves, and each other.

Two Snaps and a Twist – An In-Depth (and Updated) Look at Snapchat on Android

 

There is an update to this post. It can be found after the ‘Conclusion’ section.

I was recently tasked with examining a two-year-old Android-based phone which required an in-depth look at Snapchat. One of the things that I found most striking (and frustrating) during this examination was the lack of a modern, in-depth analysis of the Android version of the application beyond the tcspahn.db file, which, by the way, doesn’t exist anymore, and the /cache folder, which isn’t really used anymore (as far as I can tell). I found a bunch of things that discussed decoding encrypted media files, but this information was years old (Snapchat 5.x). I own the second edition of Learning Android Forensics by Skulkin, Tyndall, and Tamma, and while this book is great, I couldn’t find where they listed the version of Snapchat they examined or the version of Android they were using; what I found during my research for this post did not really match what was written in their book. A lot of things have changed.

Googling didn’t seem to help either; I just kept unearthing the older research. The closest I got was a great blog post by John Walther that examined Snapchat 10.4.0.54 on Android Marshmallow. Some of John’s post lined up with what I was seeing, while other parts did not.

WHAT’S THE BIG DEAL?

Snapchat averages 190 million daily users, which is more than half of the U.S. population, and those 190 million people send three billion snaps (pictures/videos) daily. Personally, I have the app installed on my phone, but it rarely sees any usage. Most of the time I use it on my kid, who likes the filters that alter his voice or require that he stick out his tongue. He is particularly fond of the recent hot dog filter.

One of the appealing things about Snapchat is that direct messages (DMs) and snaps disappear after they’re opened. While the app can certainly be used to send silly, ephemeral pictures or videos, some people find a way to twist the app for their own nefarious purposes.

There has been plenty written in the past about how some traces of activity are actually recoverable, but, again, nothing recent. I was surprised to find that there was actually more activity-related data left behind than I thought.

Before we get started, just a few things to note (as usual). First, my test data was generated using a Pixel 3 running Android 9.0 (Pie) with a patch level of February 2019. Second, the version of Snapchat I tested is 10.57.0.0, which was the most current version as of 05/22/2019. Third, while the phone was not rooted, it did have TWRP, version 3.3.0-0, installed. Extracting the data was straightforward as I had the Android SDK Platform tools installed on my laptop. I booted into TWRP and then ran the following from the command line:

adb pull /data/data/com.snapchat.android

That’s it. The pull command dropped the entire folder into the same path where the platform tools resided.

As part of this testing, I extracted the com.snapchat.android folder five different times over a period of 8 days because I wanted to see what stuck around versus what did not. I believe it is also important to understand the volatility of the data in this app; that understanding will help investigators in the field and examiners know exactly how much time, if any, they have before the data they are seeking is no longer available.

I will add that I tested two tools to see what they could extract: Axiom (version 3.0) and Cellebrite (UFED 4PC 7.18 and Physical Analyzer 7.19). Both tools failed to extract (parsing not included) any Snapchat data. I am not sure if this is a symptom of these tools (I hope not) or my phone. Regardless, both tools extracted nothing.

TWO SNAPS AND…SOME CHANGE

So, what’s changed? Quite a bit as far as I can tell. The storage location of some of the data that we typically seek has changed. There are enough changes that I will not cover every single file/folder in Snapchat. I will just focus on those things that I think may be important for examiners and/or investigators.

One thing has not changed: the timestamp format. Unless otherwise noted, all timestamps discussed are in Unix Epoch.

The first thing I noticed is that the root level has some new additions (along with some familiar faces). The folders that appear to be new are “app_textures”, “lib”, and “no_backup.” See Figure 1.

Figure 1. Root level of the com.snapchat.android folder.

The first folder that may be of interest is one that has been of interest to forensicators and investigators since the beginning: “databases.” The first database of interest is “main.db.” This database replaces tcspahn.db as it now contains a majority of user data (again, tcspahn.db does not exist anymore). There is quite a bit in here, but I will highlight a few tables. The first table is “Feed.” See Figure 2.

Figure 2. The Feed.

This table contains the last action taken in the app. Specifically, the parties involved in that action (seen in Figure 2), what the action was, and when the action was taken (Figure 3). In Figure 4 you can even see which party did what. The column “lastReadTimestamp” is the absolute last action, and the column “lastReader” shows who did that action. In this instance, I had sent a chat message from Fake Account 1 (“thisisdfir”) to Fake Account 2 (“hickdawg957”) and had taken a screenshot of the conversation using Fake Account 1. Fake Account 2 then opened the message.

Figure 3. Last action.

Figure 4. Who did what?
The second table is “Friend.” This table contains anyone who may be my friend. The table contains the other party’s username, user ID, display name, the date/time I added that person as a friend (column “addedTimestamp”), and the date/time the other person added me as a friend (column “reverseAddedTimestamp”). Also seen are any emojis that may be assigned to my friends. See Figures 5, 6, and 7.

Figure 5. Username, User ID, & Display Name.
Figure 6. Friendmojis (emojis added to my friends).

Figure 7. Timestamps for when I added friends and when they added me.

Note that the timestamps are for when I originally added the friend/the friend added me. The timestamps here translate back to dates in November of 2018, which is when I originally created the accounts during the creation of my Android Nougat image.

One additional note here. Since everyone is friends with the “Team Snapchat” account, the value for that entry in the “addedTimestamp” column is a good indicator of when the account you’re examining was created.

The next table is a biggie: Messages. I will say that I had some difficulty actually capturing data in this table. The first two attempts involved sending a few messages back and forth, letting the phone sit for 10 or so minutes, and then extracting the data. In each of those instances, absolutely NO data was left behind in this table.

In order to actually capture the data, I had to leave the phone plugged in to the laptop, send some messages, screenshot the conversation quickly, and then boot into TWRP, which all happened in under two minutes’ time. If Snapchat is deleting the messages from this table that quickly, they will be extremely hard to capture in the future.

Figure 8 is a screenshot of my conversation (all occurred on 05/30/2019) taken with Fake Account 1 (on the test phone) and Figure 9 shows the table entries. The messages on 05/30/2019 start on Row 6.

Figure 8. A screenshot of the conversation.

Figure 9. Table entries of the conversation.

The columns “timestamp” and “seenTimestamp” are self-explanatory. The column “senderId” is the “id” column from the Friends table. Fake Account 1 (thisisdfir) is senderId 2 and Fake Account 2 (hickdawg957) is senderId 1. The column “feedRowId” tells you who the conversation participants are (beyond the sender). The values link back to the “id” column in the Feed table previously discussed. In this instance, the participants in the conversation are hickdawg957 and thisisdfir.
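
For anyone working with main.db directly, a minimal query sketch is below. The table and column names are approximations based on what I saw in my copy of the database (verify them with PRAGMA table_info first), and the division by 1000 assumes the timestamps are stored as Unix epoch milliseconds.

import sqlite3

# A minimal sketch, not a definitive parser; verify table/column names against
# your copy of main.db before relying on the output.
conn = sqlite3.connect("main.db")
query = """
    SELECT datetime(m.timestamp / 1000, 'unixepoch')     AS sent_utc,
           datetime(m.seenTimestamp / 1000, 'unixepoch') AS seen_utc,
           f.username                                    AS sender,
           m.type
    FROM Message AS m
    LEFT JOIN Friend AS f ON f.id = m.senderId
    ORDER BY m.timestamp;
"""
for row in conn.execute(query):
    print(row)
conn.close()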

In case you missed it, Figure 8 actually has two saved messages between these two accounts from December of 2018. Information about those saved messages appears in Rows 1 and 2 in the table. Again, these are relics from previous activity and were not generated during this testing. This is an interesting find as I had completely wiped and reinstalled Android multiple times on this device since those messages were sent, which leads me to speculate these messages may be saved server-side.

In Figure 10, the “type” column is seen. This column shows the type of message that was transmitted. There are three “snap” entries here, but, based on the timestamps, these are not snaps that I sent or received during this testing.

Figure 10. The “types” of messages.
After the “type” column there are a lot of NULL values in a bunch of columns, but you eventually get to the message content, which is seen in Figure 11. Message content is stored as blob data. You’ll also notice there is a column “savedStates.” I am not sure exactly what the entries in the cells are referring to, but they line up with the saved messages.

Figure 11. Message (blob) content.

In Figure 12, I bring up one of the messages that I recently sent.

Figure 12. A sample message.

The next table is “Snaps.” This table is volatile, to say the least. The first data extraction I performed was on 05/22/2019 around 19:00. However, I took multiple pictures and sent multiple snaps on 05/21/2019 around lunch time and the following morning on 05/22/2019. Overall, I sent eight snaps (pictures only) during this time. Figure 13 shows what I captured during my first data extraction.

Figure 13. I appear to be missing some snaps.
Of the eight snaps that I sent, only six appear in the table. The first two entries in the table pre-date when I started the testing (on 05/21/2019), so those entries are out (they came from Team Snapchat). The first timestamp is from the first snap I sent on 05/22/2019 at 08:24. The two snaps from 05/21/2019 are not here. So, within 24 hours, the data about those snaps had been purged.

On 05/25/2019 I conducted another data extraction after having received a snap and sending two snaps. Figure 14 shows the results.

Figure 14. A day’s worth of snaps.
The entries seen in Figure 13 (save the first two) are gone, but there are two entries there for the snaps I sent. However, there is no entry for the snap I received. I checked all of the tables and there was nothing. I received the snap at 15:18 that day, and performed the extraction at 15:51. Now, I don’t know for sure that a received snap would have been logged. I am sure, however, that it was not there. There may be more testing needed here.

Figure 15 shows the next table, “SendToLastSnapRecipients.” This table shows the user ID of the person I last sent a snap to in the “key” column, and the time at which I sent said snap.

Figure 15. The last snap recipient.

MEMORIES

During the entire testing period I took a total of 13 pictures. Of those 13, I saved 10 of them to “Memories.” Memories is Snapchat’s internal gallery, separate from the phone’s Photos app. After taking a picture and creating an overlay (if desired), you can choose to save the picture, which places it in Memories. If you were to decide to save the picture to your Photos app, Snapchat will allow you to export a copy of the picture (or video).

And here is a plus for examiners/investigators: items placed in Memories are stored server-side. I tested this by signing into Fake Account 1 from an iOS device, and guess what…all of the items I placed in Memories on the Pixel 3 appeared on the iOS device.

Memories can be accessed by swiping up from the bottom of the screen. Figure 16 shows the Snapchat screen after having taken a photo but before snapping (sending) it. Pressing the area in the blue box (bottom left) saves the photo (or video) to Memories. The area in the red box (upper right) are the overlay tools.

Figure 16. The Snapchat screen.

Figure 17 shows the pictures I have in my Memories. Notice that there are only 9 pictures (not 10). More on that in a moment.

Figure 17. My memories. It looks like I am short one picture.

The database memories.db stores relevant information about files that have been saved to Memories. The first table of interest is “memories_entry.” This table contains an “id,” the “snap_id,” and the date the snap was created. There are two columns regarding the time: “created_time” and “latest_created_time.” In Figure 18 there is a few seconds difference between the values in some cells in the two columns, but there are also a few that are the same value. In the cells where there are differences, the differences are negligible.

There is also a column titled “is_private” (seen in Figure 19). This column refers to the My Eyes Only (MEO) feature, which I will discuss shortly. For now, just know that the value of 1 indicates “yes.”

Figure 18. Memories entries.

Figure 19. My Eyes Only status.

(FOR) MY EYES ONLY

I have been seeing a lot of listserv inquiries as of late regarding MEO. Cellebrite recently added support for MEO file recovery in Android as of Physical Analyzer 7.19 (iOS to follow), and, after digging around in the memories database, I can see why this would be an issue.

MEO allows a user to protect pictures or videos with a passcode; this passcode is separate from the user’s password for their Snapchat account. A user can opt to use a 4-digit passcode, or a custom alphanumeric passcode. Once a user indicates they want to place a media file in MEO, that file is moved out of the Memories area into MEO (it isn’t copied to MEO).

MEO is basically a private part of Memories. So, just like everything else in Memories, MEO items are also stored server-side. I confirmed this when I signed in to Fake Account 1 from the iOS device; the picture I saved to MEO on the Pixel 3 appeared in MEO on the iOS device. The passcode was the same, too. Snapchat says if a user forgets the passcode to MEO, they cannot help recover it. I’m not sure how true that is, but who knows.

If you recall, I placed 10 pictures in Memories, but Figure 17 only showed 9 pictures. That is because I moved one picture to MEO. Figure 20 shows my MEO gallery.

Figure 20. MEO gallery.

In the memories database, the table “memories_meo_confidential” contains entries about files that have been placed in MEO. See Figure 21.

Figure 21. MEO table in the memories database.

This table contains a “user_id,” the hashed passcode, a “master_key,” and the initialization vector (“iv”). The “master_key” and “initialization vector” are both stored in base64. And, the passcode….well, it has been hashed using bcrypt (ugh). I will add that Cellebrite reports Physical Analyzer 7.19 does have support for accessing MEO files, and, while I did have access to 7.19, I was not able to tell if it was able to access my MEO file since it failed to extract any Snapchat data.
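
Because the passcode is stored as a bcrypt hash, the only offline option I can think of is guess-and-check. The sketch below is my own assumption of how that could look (it is not a documented Snapchat or Cellebrite method) and uses the third-party Python bcrypt package to run through the 10,000 possible 4-digit passcodes. Keep in mind bcrypt is deliberately slow, so even this tiny keyspace takes a while, and a custom alphanumeric passcode would be impractical to attack this way.

import bcrypt

# A minimal sketch: test every four-digit passcode against the bcrypt hash from
# the memories_meo_confidential table. The hash below is generated for
# demonstration; substitute the real value pulled from the database.
stored_hash = bcrypt.hashpw(b"1234", bcrypt.gensalt())

for candidate in (f"{i:04d}" for i in range(10000)):
    if bcrypt.checkpw(candidate.encode(), stored_hash):
        print("Passcode found:", candidate)
        break
else:
    print("No four-digit match; the passcode may be alphanumeric.")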

The “user_id” is interesting: “dummy.” I have no idea what that is referring to, and I could not find it anywhere else in the data I extracted.

The next table is “memories_media.” This table does have a few tidbits of interesting data: another “id,” the size of the file (“size”), and what type of file it is (“format”). Since all of my Memories are pictures, all of the cells show “image_jpeg.” See Figures 22 and 23.

Figure 22. “memories_media.”

Figure 23. “memories_media,” part 2.

The next table is “memories_snap.” This table has a lot of information about my pictures, and brings together data from the other tables in this database. Figure 24 shows a column “media_id,” which corresponds to the “id” in the “memories_media” table discussed earlier. There is also a “creation_time” and “time_zone_id” column. See Figure 24.

Figure 24. id, media_id, creation_time, and time zone.

Figure 25 shows the width and height of the pictures. Also note the column “duration.” The value is 3.0 for each picture. I would be willing to bet that number could be higher or lower if the media were videos.

Figure 25 also shows the “memories_entry_id,” which corresponds to the “id” column in the “memories_entry” table. There is also a column for “has_location.” Each of the pictures I placed in Memories has location data associated with it (more on that in a moment).

Figure 25. Picture size, another id, and a location indicator.

Figure 26 is interesting as I have not been able to find the values in the “external_id” or “copy_from_snap_id” columns anywhere.

Figure 26. No clue here.

The data seen in Figure 27 could be very helpful in situations where an examiner/investigator thinks there may be multiple devices in play. The column “snap_create_user_agent” contains information on what version of Snapchat created the snap, along with the Android version and, in my case, my phone model.

Figure 27. Very helpful.

The column “snap_capture_time” is the time I originally took the picture and not the time I sent the snap.

Figure 28 shows information about the thumbnail associated with each entry.

Figure 28. Thumbnail information.

Figure 29 is just like Figure 27 in its level of value. It contains the latitude and longitude of the device when the picture was taken. I plotted each of these entries and I will say that the coordinates are accurate to +/- 10 feet. I know the GPS capabilities of every device are different, so just be aware that your mileage may vary.

Figure 29. GPS coordinates!!

Figure 29 also has the column “overlay_size.” This is a good indication if a user has placed an overlay in the picture/video. Overlays are things that are placed in a photo/video after it has been captured. Figure 30 shows an example of an overlay (in the red box). The overlay here is caption text.

Figure 30. An overlay example.

If the value in the overlay_size column is NULL that is a good indication that no overlay was created.

Figure 31 shows the “media_key” and “media_iv,” both of which are in base64. Figure 32 shows the “encrypted_media_key” and “encrypted_media_iv” values. As you can see there is only one entry that has values for these columns; that entry is the picture I placed in MEO.

Figure 31. More base64.

Figure 32. Encrypted stuff.

The next table that may be of interest is “memories_remote_operation.” This shows all of the activity taken within Memories. In the “operation” column, you can see where I added the 10 pictures to Memories (ADD_SNAP_ENTRY_OPERATION). The 11th entry, “UPDATE_PRIVATE_ENTRY_OPERATION,” is where I moved a picture into MEO. See Figure 33.

Figure 33. Remote operations.

The column “serialized_operation” stores information about the operation that was performed. The data appears to be stored in JSON format. The cell contains a lot of the same data that was seen in the “memories_snap” table. I won’t expand it here, but DB Browser for SQLite does a good job of presenting it.

Figure 34 shows a better view of the column plus the “created_timestamp” column. This is the time of when the operation in the entry was performed.

Figure 34. JSON and a timestamp for the operation.

Figure 35 contains the “target_entry” column. The values in this column refer to the “id” column in the “memories_entry” table.

Figure 35. Operation targets.

To understand the next database, journal, I first have to explain some additional file structure of the com.snapchat.android folder. If you recall all the way back to Figure 1, there was a folder labeled “files.” Entering that folder reveals the folders seen in Figure 36. Figure 37 shows the contents of the “file_manager” folder.

Figure 36. “Files” structure.

Figure 37. file_manager.

The first folder of interest here is “media_package_thumb,” the contents of which can be seen in Figure 38.

Figure 38. Thumbnails?

Examining the first file here in hex finds a familiar header: 0xFF D8 FF E0 (ÿØÿà). These things are actually JPEGs. So, I opened a command line in the folder, typed ren *.* *.jpg, and BAM: pictures! See Figure 39.

Figure 39. Pictures!
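
If you would rather let the file signatures drive the renaming (which comes in handy later when videos may be mixed in with pictures), a small sketch is below. The folder name is an assumption on my part; point it at whichever file_manager subfolder you are working with.

import os

# A minimal sketch: append an extension based on the file signature rather than
# blindly renaming everything to .jpg.
folder = "media_package_thumb"  # assumption - adjust to your extraction path

for name in os.listdir(folder):
    path = os.path.join(folder, name)
    if not os.path.isfile(path):
        continue
    with open(path, "rb") as f:
        header = f.read(12)
    if header.startswith(b"\xff\xd8\xff"):   # JPEG
        ext = ".jpg"
    elif header[4:8] == b"ftyp":             # MP4/MOV container
        ext = ".mp4"
    else:
        continue                             # leave anything unrecognized alone
    os.rename(path, path + ext)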

Notice there are a few duplications here. However, there are some pictures here that were not saved to memories and were not saved anywhere else. As an example, see the picture in Figure 40.

Figure 40. A non-saved, non-screenshot picture.
Figure 40 is a picture of the front of my employer’s building. For documentation purposes, I put a text overlay in the picture with the date/time I took it (to accompany my notes). I then snapped this picture to Fake Account 2, but did not save it to Memories, did not save it to my Photos app, and did not screenshot it. However, here it is, complete with the overlay. Now, while this isn’t the original picture (it is a thumbnail) it can still be very useful; one would need to examine the “snap” table in the main database to see if there was any activity around the MAC times for the thumbnail.

The next folder of interest is the “memories_media” folder. See Figure 41.

Figure 41. Hmm…

There are 10 items here. These are also JPEGs. I performed the same operation here as I did in the “media_package_thumb” folder and got the results seen in Figure 42.

Figure 42. My Memories, sans overlays.

These are the photographs I placed in Memories, but the caption overlays are missing. The picture that is in MEO is also here (the file starting with F5FC6BB…). Additionally, these are high resolution pictures.

You may be asking yourself “What happened to the caption overlays?” I’m glad you asked. They are stored in the “memories_overlay” folder. See Figure 43.

Figure 43. My caption overlays.

Just like the previous two folders, these are actually JPEGs. I performed the rename function, and got the results seen in Figure 44. Figure 45 shows the overlay previously seen in Figure 30.

Figure 44. Overlays.

Figure 45. The Megaman overlay from Figure 30.

The folder “memories_thumbnail” is the same as the others, except it contains just the files in Memories (with the overlays). For brevity’s sake, I will just say the methodology to get the pictures to render is the same as before. Just be aware that while I just have pictures in my Memories, a user could put videos in there, too, so you could have a mixture of media. If you do a mass-renaming, and a file does not render, the file extension is probably wrong, so adjust the file extension(s) accordingly.

Now that we have discussed those file folders, let’s get back to the journal database. This database keeps track of everything in the “file_manager” directory, including those things we just discussed. Figure 46 shows the top level of the database’s entries.

Figure 46. First entries in the journal database.

If I filter the “key” column using the term “package” from the “media_package_thumb” folder (the “media_package_thumb.0” files) I get the results seen in Figure 47.

Figure 47. Filtered results.

The values in the “key” column are the file names for the 21 files seen in Figure 38. The values seen in the “last_update_time” column are the timestamps for when I took the pictures. This is a method by which examiners/investigators could potentially recover snaps that have been deleted.
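
A quick way to pull that mapping out is sketched below. The table name (“journal”) and the assumption that last_update_time is Unix epoch milliseconds are mine, so confirm both against your copy of journal.db before relying on the output.

import sqlite3

# A minimal sketch; verify the table/column names and timestamp precision first.
conn = sqlite3.connect("journal.db")
query = """
    SELECT key,
           datetime(last_update_time / 1000, 'unixepoch') AS updated_utc
    FROM journal
    WHERE key LIKE 'media_package_thumb%'
    ORDER BY last_update_time;
"""
for key, updated_utc in conn.execute(query):
    print(updated_utc, key)
conn.close()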

WHAT ELSE IS THERE?

As it turns out, there are a few more non-database artifacts left behind, which are located in the “shared_prefs” folder seen in Figure 1. The contents can be seen in Figure 48.

Figure 48. shared_prefs contents.

The first file is identity_persistent_store.xml seen in Figure 49. The file contains the timestamp for when Snapchat was installed on the device (INSTALL_ON_DEVICE_TIMESTAMP), when the first logon occurred on the device (FIRST_LOGGED_IN_ON_DEVICE_TIMESTAMP), and the last user to logon to the device (LAST_LOGGED_IN_USERNAME).

Figure 49. identity_persistent_store.xml.
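
Pulling these values out programmatically is simple if the file follows the standard Android shared-preferences layout, which is what I am assuming here (long values stored as attributes, strings as attributes or element text, and the timestamps as Unix epoch milliseconds):

import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# A minimal sketch, assuming the standard Android shared-preferences XML layout
# and epoch-millisecond timestamp values.
root = ET.parse("identity_persistent_store.xml").getroot()

for element in root:
    name = element.get("name")
    value = element.get("value") or element.text
    if element.tag == "long" and name and "TIMESTAMP" in name:
        value = datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc)
    print(name, value)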

Figure 50 shows the file LoginSignupStore.xml. It contains the username that is logged in.

Figure 50. Who is logged in?

The file user_session_shared_pref.xml has quite a bit of account data in it, and is seen in Figure 51. For starters, it contains the display name (key_display_name), the username (key_username), and the phone number associated with the account (key_phone).

The value “key_created_timestamp” is notable. This timestamp converts to November 29, 2018 at 15:13:34 (EST). Based on my notes from my Nougat image, this was around the time I established Fake Account 1, which was used in the creation of the Nougat image. This might be a good indicator of when the account was established, although you could always get that data by serving Snapchat with legal process.

Rounding it out is the “key_user_id” (seen in the Friends table of the main database) and the email associated with the account (key_email).

Figure 51. user_session_shared_pref.xml

CONCLUSION

Snapchat’s reputation precedes it very well. I have been in a few situations where examiners/investigators automatically threw up their hands and gave up after having been told that potential evidence was generated/contained in Snapchat. They wouldn’t even try. I will say that while I always have (and will) try to examine anything regardless of what the general consensus is, I did share a bit of others’ skepticism about the ability to recover much data from Snapchat. However, this exercise has shown me that there is plenty of useful data left behind by Snapchat that can give a good look into its usage.

Update

Alexis Brignoni over at Initialization Vectors noticed that I failed to address something in this post. First, thanks to him for reading and contacting me. 🙂 Second, he noticed that I did not address Cellebrite Physical Analyzer’s (v 7.19) and Axiom’s (v 3.0) ability to parse my test Snapchat data (I addressed the extraction portion only).

We both ran the test data against both tools and found both failed to parse any of the databases. Testing found that while Cellebrite found the pictures I describe in this post, it did not apply the correct MAC times to them (from the journal.db). Axiom failed to parse the databases and failed to identify any of the pictures.

This is not in any way, shape, or form a knock on or an attempt to single out these two tools; these are just the tools to which I happen to have access. These tools work, and I use them regularly. The vendors do a great job keeping up with the latest developments in both the apps and the operating systems. Sometimes, though, app developers will make a hard turn all of a sudden, and it does take time for the vendors to update their tools. Doing so requires R&D and quality control via testing, which can take a while depending on the complexity of the update.

However, this exercise does bring to light an important lesson in our discipline, one that bears repeating: test and know the limitations of your tools. Knowing the limitations allows you to know when you may be missing data/getting errant readings. Being able to compensate for any shortcomings and manually examine the data is a necessary skillset in our discipline.

Thank you Alexis for the catch and assist!

Ridin’ With Apple CarPlay

I have been picking on Google lately.  In fact, all of my blog posts thus far have focused on Google things.  Earlier this year I wrote a blog about Android Auto, Google’s solution for unifying telematic user interfaces (UIs), and in it I mentioned that I am a daily CarPlay driver.  So, in the interest of being fair, I thought I would pick on Apple for a bit and take a look under the hood of CarPlay, Apple’s foray into automotive telematics.

Worldwide, 62 different auto manufacturers make over 500 models that support CarPlay.  Additionally, 6 after-market radio manufacturers (think Pioneer, Kenwood, Clarion, etc.) support CarPlay.  In comparison, 41 auto manufacturers (again, over 500 models – this is an increase since my earlier post) and 19 after-market radio manufacturers support Android Auto.  CarPlay runs on iPhone 5 and later.  It has been a part of iOS since its arrival (in iOS 7.1), so there is no additional app to download (unlike Android Auto).  A driver simply plugs the phone into the car (or wirelessly pairs it if the car supports it) and drives off; a wired connection negates the need for a Bluetooth connection.  The toughest thing about CarPlay setup is deciding how to arrange the apps on the home screen.

In roughly 5 years’ time CarPlay support has grown from 3 to 62 different auto manufacturers.  I can remember shopping for my 2009 Honda (in 2012) and not seeing anything mentioned about hands-free options.  Nowadays, support for CarPlay is a feature item in a lot of car sales advertisements.  With more and more states enacting distracted driving legislation, I believe using these hands-free systems will eventually become mandatory.

Before we get started, let’s take a look at CarPlay’s history.

Looking in the Rearview Mirror

The concept of using an iOS device in a car goes back further than most people realize.  In 2010 BMW announced support for iPod Out, which allowed a driver to use their iPod via an infotainment console in select BMW & Mini models.

iPod Out-1
Figure 1.  iPod Out.  The great-grandparent of CarPlay.

iPod Out-2
Figure 2.  iPod Out (Playback).

The iPod connected to the car via the 30-pin to USB cable, and it would project a UI to the screen in the car.  iPod Out was baked into iOS 4, so the iPhone 3G, 3GS, 4, and the 2nd and 3rd generation iPod Touches all supported it.  While BMW was the only manufacturer to support iPod Out, any auto manufacturer could have supported it; however, it just wasn’t widely advertised or adopted.

In 2012 Siri Eyes Free was announced at WWDC as part of iOS 6.  Siri Eyes Free would allow a user to summon Siri (then a year old in iOS) via buttons on a steering wheel and issue any command that one could normally issue to Siri.  This differed from iPod Out in that there was no need for a wired connection.  The car and iOS device (probably a phone at this point) utilized Bluetooth to communicate.  The upside to Siri Eyes Free, beyond the obvious safety feature, was that it could work with any in-car system that could utilize the correct version of the Bluetooth Hands-Free Profile (HFP).  No infotainment center/screen was necessary since it did not need to project a UI.  A handful of auto manufacturers signed on, but widespread uptake was still absent.

At the 2013 WWDC Siri Eyes Free morphed into iOS in the Car, which was part of iOS 7.  iOS in the Car can be thought of as the parent of CarPlay, and closely resembles what we have today.  There were, however, some aesthetic differences, which can be seen below.

HomeScreen
Figure 3.  Apple’s Eddy Cue presenting iOS in the Car (Home screen).

iOS-in-the-Car-integration-Chevy-Spark-MyLink-720x340
Figure 4.  Phone call in iOS in the Car.

dims
Figure 5.  Music playback in iOS in the Car.

Screen Shot 2013-06-10 at 12.59.52 PM
Figure 6.  Getting directions.

Screen Shot 2013-06-10 at 2.09.12 PM
Figure 7.  Navigation in iOS in the Car.

iOS in the Car needed a wired connection to the vehicle, or so was the general thought at the time.  During the iOS 7 beta, switches were found indicating that iOS in the Car could, potentially, operate over a wireless connection, and there was even mention of it possibly leveraging AirPlay (more on that later in this post).  Unfortunately, iOS in the Car was not present when iOS 7 was initially released.

The following spring Apple presented CarPlay, and it was later released in iOS 7.1.  At launch there were three auto manufacturers that supported it:  Ferrari, Mercedes-Benz, and Volvo.  Personally, I cannot afford cars from any of those companies, so I am glad more manufacturers have added support.

CarPlay has changed very little since its release.  iOS 9 brought wireless pairing capabilities to car models that could support it, iOS 10.3 added recently used apps to the upper left part of the screen, and iOS 12 opened up CarPlay to third party navigation applications (e.g. Google Maps and Waze).  Otherwise, CarPlay’s functionality has stayed the same.

With the history lesson now over, there are a couple of things to mention.  First, this research was conducted using my personal phone, an iPhone XS (model A1920) running iOS 12.2 (build 16E227).  So, while I do have data sets, I will not be posting them online as I did with the Android Auto data.  If you are interested in the test data, contact me through the blog site and we’ll talk.

Second, at least one of the files discussed (the cache file in the locationd path) is in a protected area of iPhone, so there are two ways you can get to it:  jailbreaking iPhone or using a certain gray-colored “key.”  The Springboard and audio data should be present in an iTunes backup or in an extraction from your favorite mobile forensic tool.

Let’s have a look around.

Test Drive

I have been using CarPlay for the past two and a half years.  A majority of that time was with an after-market radio from Pioneer (installed in a 2009 Honda), and the last six months have been with a factory-installed display unit in a 2019 Nissan.  One thing I discovered is that there are some slight aesthetic differences in how each auto manufacturer/after-market radio manufacturer visually implements CarPlay, so your visual mileage may vary.  However, the functionality is the same across the board.  CarPlay works just like iPhone.

Figure 8 shows the home screen of CarPlay.

IMG_0769 2
Figure 8.  CarPlay’s home screen.

The home screen looks and operates just like iPhone, which was probably the idea.  Apple did not want users to have a large learning curve with CarPlay.  Each icon represents an app, and the apps are arranged in rows and columns.  Unlike iPhone, creating folders is not an option, so it is easy to end up with multiple home screens.  The icons are large enough that not much fine motor skill is needed to press one, which means you probably won’t be hunting for, or pressing, the wrong app icon very often.

The button in the orange box is the home button.  It is persistent across the UI, and it works like the iPhone home button:  press it while anywhere and you are taken back to the home screen.  The area in the blue box indicates there are two home screens available, and the area in the red box shows the most recently used apps.

Most of the apps should be familiar to iPhone users, but there is one that is not seen on iPhone:  the Now Playing app.  It is not actually an app…think of it more as a shortcut.  Pressing it brings up whatever app currently has control of the virtual sound interface of CoreAudio (i.e. whatever app is currently playing audio, or last played audio if that app is suspended in iPhone’s background).

Swiping left shows my second home screen (Figure 9).  The area in the red box is the OEM app.  If I were to press it, I would exit the CarPlay UI and return to Nissan Connect (Nissan’s telematics system); however, CarPlay would still be running in the background.  The OEM app icon will change depending on the auto maker.  So, for example, if you were driving a Honda, this icon would be different.

IMG_0771 1.jpg
Figure 9.  The second batch of apps on the second home screen.

A user can arrange the apps any way they choose and there are two ways of doing this, both of which are like iPhone.  The first way is to press and hold an app on the car display unit, and then drag it to its desired location.  The second way is done from the screen seen in Figure 10.

IMG_0801.JPG
Figure 10.  CarPlay settings screen.

The screen in Figure 10 can be found on iPhone by navigating to Settings > General > CarPlay and selecting the CarPlay unit (or units – you can have multiple)…mine is “NissanConnect.”  Moving apps around is the same here as it is on the display unit (instructions are present midway down the screen).  Apps that have a minus sign badge can be removed from the CarPlay home screen.  When an app is removed it is relegated to the area just below the CarPlay screen; in Figure 10 that area holds the MLB AtBat app, AudioBooks (iBooks), and WhatsApp.  If I wanted to add any relegated apps back to the CarPlay home screen I could do so by pushing the plus sign badge.  Some apps cannot be relegated:  Phone, Messages, Maps, Now Playing, Music, and the OEM app.  Everything else can be relegated.

One thing to note here.  iOS considers the car to be a USB accessory, so CarPlay does have to abide by the USB Restricted Mode setting on iPhone (if enabled).  This is regardless of whether the Allow CarPlay While Locked toggle switch is set to the on position.

The following screenshots show music playback (Figure 11), navigation (Figure 12), and podcast playback (Figure 13).

IMG_0796.PNG
Figure 11.  Music playback.

IMG_0782.PNG
Figure 12.  Navigation in CarPlay.

IMG_0794.PNG
Figure 13.  Podcast playback.

Messages in CarPlay is a stripped-down version of Messages on iPhone.  The app will display a list of conversations (see Figure 14), but it will not display text of the conversations (Apple obviously doesn’t want a driver reading while driving).  Instead, Siri is used for both reading and dictating messages.

IMG_0792.jpg
Figure 14.  Messages conversation list.

Phone is seen in Figure 15; specifically, the Favorites tab.  The tabs at the top of the screen (Favorites, Recents, Contacts, Keypad, and Voicemail) mirror those seen along the bottom of the Phone app on iPhone, and they look just like their iPhone counterparts.

IMG_0790
Figure 15.  Phone favorites.

IMG_0805
Figure 16.  The keypad in Phone.

If I receive a phone call, I can answer it in two ways:  pressing the green accept button (seen in Figure 17) or pushing the telephone button on my steering wheel.  Answering the call changes the screen to the one seen in Figure 18.  Some of the items in Figure 18 look similar to those seen in iOS in the Car (Figure 4).

IMG_0807
Figure 17.  An incoming call.

IMG_0809
Figure 18.  An active phone call.

Most apps will appear like those pictured above, although there may be some slight visual/functional differences depending on the app’s purpose, and, again, there may be some further visual differences depending on what car or after-market radio you are using.

Speaking of purpose, CarPlay is designed to do three things:  voice communication, audio playback, and navigation.  These things can be done fairly well through CarPlay, and done safely, which, I believe, is the main purpose.  Obviously, some popular apps, such as Twitter or Facebook, don’t work well in a car, so I don’t expect true social media apps to be in CarPlay any time soon if at all (I could be wrong).

Now that we have had a tour, let’s take a look under the hood and see what artifacts, if any, can be found.

Under the Hood

After snooping around in iOS for a bit I came to a realization that CarPlay is forensically similar to Android Auto:  it merely projects the apps that can work with it onto the car’s display unit, so the individual apps contain a majority of the user-generated data.  Also like Android Auto, CarPlay does leave behind some artifacts that may be valuable to forensic examiners/investigators, and, just like any other artifacts an examiner may find, these can be used in conjunction with other data sources to get a holistic picture of a device.

One of the first artifacts that I found is the cache.plist file under locationd.  It can be found in the private > var > root > Library > Caches > locationd path.  cache.plist contains the times of last connect and last disconnect.  I did not expect to find connection times in the cache file of the location daemon, so this was a pleasant surprise.  See Figure 19.

LastVehicleConnection.jpg
Figure 19.  Last connect and last disconnect times.

There are actually three timestamps here, two of which I have identified.  The timestamp in the red box is the last time I connected to my car. It is stored in CF Absolute Time (aka Mac Absolute Time), which is the number of seconds since January 1, 2001 00:00:00 UTC.  The time, 576763615.86389804, converts to April 12, 2019 at 8:06:56 AM (EDT).  I had stopped at my favorite coffee shop on the way to work and when I hopped back in the car, I plugged in my iPhone and CarPlay initialized.  See Figure 20.

LastConnectTime
Figure 20.  Time of last connect.
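As an aside, CF Absolute Time values like this one are easy to sanity-check yourself.  Below is a minimal Python sketch; the only input is the value from Figure 19, nothing else is assumed.

```python
from datetime import datetime, timedelta, timezone

# CF Absolute (Mac Absolute) Time counts seconds from 2001-01-01 00:00:00 UTC.
CF_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def cf_absolute_to_utc(seconds: float) -> datetime:
    """Convert a CF Absolute timestamp to a UTC datetime."""
    return CF_EPOCH + timedelta(seconds=seconds)

print(cf_absolute_to_utc(576763615.86389804))
# 2019-04-12 12:06:55.863898+00:00, i.e. ~8:06 AM EDT
```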

The timestamp in the green box, just under the string CarKit NissanConnect, is a bit deceptive.  It is the time I disconnected from my car.  Decoding it yields April 12, 2019 at 8:26:18 AM (EDT).  Here, I disconnected from my car, walked into work, and badged in at 8:27:14 AM (EDT).  See Figure 21.

LastDisconnectTime
Figure 21.  Time of last disconnect.

The time in the middle, 576764725.40157998, is just under a minute before the timestamp in the green box.  Based on my notes, it is the time I stopped playback on a podcast that I was listening to at the time I parked.  I also checked KnowledgeC.db (via DB Browser for SQLite) and found an entry in it for “Cached Locations,” with the GPS coordinates being where I parked in my employer’s parking lot.  Whether the middle timestamp represents the time the last action was taken in CarPlay is a good question and requires more testing.
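If you want to poke at KnowledgeC.db yourself, a hedged sketch is below.  The ZOBJECT/ZSTREAMNAME layout is well documented elsewhere, but the exact stream names (including whichever one backs the “Cached Locations” entries) vary by iOS version, so the query simply lists whatever streams are present and when each was last written; treat the column choices as assumptions based on that commonly documented schema.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

CF_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

con = sqlite3.connect("knowledgeC.db")  # path to an exported copy of the database

# List the streams recorded on this device and when each was last written.
# ZSTARTDATE is stored as CF Absolute Time (seconds since 2001-01-01 UTC).
query = """
    SELECT ZSTREAMNAME, COUNT(*) AS entries, MAX(ZSTARTDATE) AS last_start
    FROM ZOBJECT
    GROUP BY ZSTREAMNAME
    ORDER BY last_start DESC
"""
for stream, entries, last_start in con.execute(query):
    when = CF_EPOCH + timedelta(seconds=last_start) if last_start else None
    print(f"{str(stream):40} {entries:6} {when}")
```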

The next file of interest here is the com.apple.carplay.plist file.  It can be found by navigating to the private > var > mobile > Library > Preferences path.  See Figure 22.

CarPlay-Plist
Figure 22.  carplay.plist

The area in the red box is of interest.  Here the name of the car that was paired is seen (NissanConnect) along with a GUID.  The fact that the term “pairings” (plural) is there along with a GUID leads me to believe that multiple cars can be paired with the same iPhone, but I wasn’t able to test this as I am the only person I know that has a CarPlay capable car.  Remember the GUID because it is seen again in discussing the next artifact.  For now, see Figure 23.

IMG_0802.JPG
Figure 23.  Main CarPlay setting page in iOS.

Figure 23 shows the settings page just above the one seen in Figure 10.  I include it merely to show that my car is labeled “NissanConnect.”

The next file is 10310139-130B-44F2-A862-7095C7AAE059-CarDisplayIconState.plist.  It can be found in the private > var > mobile > Library > Springboard path.  The first part of the file name should look familiar…it is the GUID seen in the com.apple.carplay.plist file.  This file describes the layout of the home screen (or screens if you have more than one).  I found other files in the same path with the CarDisplayIconState string in their file names, but with different GUIDs, which causes me to further speculate that multiple CarPlay units can be synced with one iPhone.  See Figure 24.

IconList-Plist-1
Figure 24.  CarPlay Display Icon State.

The areas in the red and blue boxes represent my home screens.  The top-level item in the red box, Item 0, represents my first home screen, and the sub-item numbers represent the location of each icon on the first home screen.  See Figure 25 for the translation.

IMG_0769
Figure 25.  Home screen # 1 layout.

The area in the blue box in Figure 24 represents my second home screen, and, again, the sub-item numbers represent the location of each icon on the screen.  See Figure 26 for the translation.

IMG_0771
Figure 26.  Home screen # 2 layout.

The entry below the blue box in Figure 24 is labeled “metadata.”  Figure 27 shows it in an expanded format.

IconList-Plist-2
Figure 27.  Icon state “metadata.”

The areas in the green and purple boxes indicate that the OEM app icon is displayed, and that it is “Nissan” (seen in Figure 26).  The areas in the orange and blue boxes describe the app icon grid layout (four columns and two rows).  The area in the red box is labeled “hiddenIcons,” and refers to the relegated apps previously seen in Figure 10.  As it turns out, the item numbers here also describe their positions.  See Figure 28.

IMG_0801
Figure 28.  Hidden icon layout.
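If you would rather pull this layout out programmatically than eyeball it in a plist viewer, something like the sketch below works.  The file name (GUID) will differ per pairing, and I am not asserting specific key names, so the script just walks whatever structure is present; list position equals on-screen position, as described above.

```python
import plistlib

# The GUID in the file name will match the pairing in com.apple.carplay.plist.
path = "Springboard/10310139-130B-44F2-A862-7095C7AAE059-CarDisplayIconState.plist"

with open(path, "rb") as fh:
    icon_state = plistlib.load(fh)

def walk(node, indent=0):
    """Recursively print the icon layout; a list index is an on-screen position."""
    pad = "  " * indent
    if isinstance(node, dict):
        for key, value in node.items():
            print(f"{pad}{key}:")
            walk(value, indent + 1)
    elif isinstance(node, list):
        for position, value in enumerate(node):
            print(f"{pad}[{position}]")
            walk(value, indent + 1)
    else:
        print(f"{pad}{node}")

walk(icon_state)
```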

Notice that this file did not describe the location of the most recently used apps in CarPlay (the area in the upper left portion of the display screen).  That information is described in com.apple.springboard, which is found in the same path.  See Figure 29.

RecentlyUsedLayout
Figure 29.  Springboard and most recently used apps.

Just like the app icon layout previously discussed, the item numbers for each most recently used app translate to positions on the display screen.  See Figure 30 for the translation.

IMG_0769 1
Figure 30.  Most recently used apps positions.

The next file is the com.apple.celestial.plist, which is found in the private > var > mobile > Library > Preferences path.  This file had a bunch of data in it, but there are three values in this file that are relevant to CarPlay.  See Figure 31.

Celestial.JPG
Figure 31.  Celestial.

The string in the green box represents the app that had last played audio within CarPlay prior to iPhone being disconnected from the car.  The area in the blue box is self-explanatory (I had stopped my podcast when I parked my car).  The item in the red box is interesting.  I had been playing a podcast when I parked the car and had stopped playback.  Before I disconnected my iPhone, I brought the Music app to the foreground, but did not have it play any music, thus it never took control of the virtual sound interface in CoreAudio.  By doing this, the string in the red box was generated.  Just to confirm, I tested this scenario a second time, but did not bring the Music app to the foreground; the value nowPlayingAppDisplayIDUponCarPlayDisconnect was not present in the second plist file.  I am sure this key has some operational value, although I am not sure what that value is.  If anyone has any idea, please let me know.
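A quick way to check these keys on other devices is to read the plist and print anything CarPlay- or Now Playing-related.  This is just a sketch: the only key name I am relying on is the one visible in Figure 31 (nowPlayingAppDisplayIDUponCarPlayDisconnect); everything else is pattern-matched rather than assumed.

```python
import plistlib

with open("Preferences/com.apple.celestial.plist", "rb") as fh:
    celestial = plistlib.load(fh)

def find(node, needles, path=""):
    """Recursively print any key whose name mentions one of the needles."""
    if isinstance(node, dict):
        for key, value in node.items():
            if any(n in key.lower() for n in needles):
                print(f"{path}/{key} = {value}")
            find(value, needles, f"{path}/{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            find(value, needles, f"{path}[{i}]")

# e.g. nowPlayingAppDisplayIDUponCarPlayDisconnect, as seen in Figure 31
find(celestial, ("carplay", "nowplaying"))
```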

As I mentioned earlier in this post, Siri does a lot of the heavy lifting in CarPlay because Apple doesn’t want you messing with your phone while you’re driving.  So, I decided to look for anything Siri-related, and I did find one thing…although I will say this is probably not exclusive to CarPlay; it may be present regardless of whether the interaction happens in CarPlay or not (more testing needed).  In the path private > var > mobile > Library > Assistant there is a plist file named PreviousConversation (there is no file extension, but the file header indicates it is a bplist).  Let me provide some context.

When I pick up my child from daycare in the afternoons, I will ask Siri to send a message, via CarPlay, to my spouse indicating that my child and I are on the way home, and she usually acknowledges.  The afternoon before I extracted the data from my iPhone (04/11/2019), I had done just that, and, after a delay, my spouse had replied “Ok.”

PreviousConversation contains the last conversation I had with Siri during this session. When I received the message, I hit the notification I received at the top of the CarPlay screen, which triggered Siri.  The session went as so:

Siri:                 “[Spouse’s name] said Ok.  Would you like to reply?”

Me:                  “No.”

Siri:                 “Ok.”

See Figure 32.

IncomingMessage.JPG
Figure 32.  Session with Siri.

The area in the red box is the name of the sender, in this case, my spouse’s (redacted) name.  The orange box was spoken by Siri, and the blue box is the actual iMessage I received from my spouse.  The purple box is what was read to me, minus the actual iMessage.  Siri’s inquiry (about my desire to reply) is seen in Figure 33.

WouldYouLikeToReply.PNG
Figure 33.  Would you like to reply?

Figure 34 contains the values for the message sender (my spouse).  Inside the red box the field “data” contains the iMessage identifier…in this case, my spouse’s phone number.  The field “displayText” is my spouse’s name (presumably pulled from my Contacts list).  Figure 35 has the message recipient information:  me.

MessageSender.PNG
Figure 34.  Message sender.

MessageRecipient.PNG
Figure 35.  Message recipient (me) plus timestamp.

Figure 35 also has the timestamp of when the message was received (orange box), along with my spouse’s chat identifier (blue box).

Siri-OK.PNG
Figure 36.  Siri’s response.

Figure 36 shows Siri’s last response to me before the session ended.

One more note:  this plist file had other interesting data in it.  One thing I noticed is that each possible response to the inquiry “Would you like to reply?” had an entry in here:  “Call” (the message sender), “Yes” (I’d like to reply), and “No” (I would not like to reply).  It might be a good research project for someone.  🙂

The next artifact actually comes from a file previously discussed:  com.apple.celestial.plist.  While examining this file I found something interesting that bears mentioning in this post.  My iPhone has never been paired via Bluetooth with my 2019 Nissan.  When I purchased the car, I immediately started using CarPlay, so there has been no need to use Bluetooth (other than testing Android Auto).  Under the endpointTypeInfo key I found the area seen in Figure 37.

CarBT.jpg
Figure 37.  What is this doing here?

The keys in the red box contain the Bluetooth MAC address for my car.  I double-checked my Bluetooth settings on the phone and the car, and the car Bluetooth radio was turned off, but the phone’s radio was on (due to my AppleWatch).  So, how does my iPhone have the Bluetooth MAC address for my car?  I do have a theory, so stay with me for just a second.  See Figure 38.

IMG_0814
Figure 38.  AirPlay indicator.

Figure 38 shows the home screen of my iPhone while CarPlay is running.  Notice that the AirPlay/Bluetooth indicator is enabled (red box).  Based on some great reverse engineering, it was found that any device that uses the AirPlay service will use its MAC address in order to identify itself (deviceid).  Now, see Figure 39.

AudioInterfaces
Figure 39. Virtual Audio Interfaces for AirPlay and CarPlay.

Figure 39 shows two files, both of which are in the Library > Audio > Plugins > HAL path.  The file on the left is the info.plist file for the Halogen driver (the virtual audio interface) for AirPlay and the file on the right is the info.plist file for the Halogen driver for CarPlay.  The plug-in identifiers for each (both starting with EEA5773D) are the same.  My theory is that CarPlay may be utilizing AirPlay protocols in order to function, at least for audio.  I know this is a stretch as those of us that use AirPlay know that it typically is done over a wireless connection, but I think there is a small argument to be made here.  Obviously, this requires more research and testing, and it is beyond the scope of this post.

Conclusion

CarPlay is Apple’s attempt at (safely) getting into your car.  It provides a singular screen experience between iPhone and the car, and it encourages safe driving.  While a majority of the user-generated artifacts are kept by the individual apps that are used, there are artifacts specific to CarPlay that are left behind.  The app icon layout, time last connected and disconnected, and last used app can all be found in these artifacts.  There are also some ancillary artifacts that may also be useful to examiners/investigators.

It has been a long time since I really dug around in iOS, and I saw a lot of interesting things that I think would be great to research, so I may be picking on Apple again in the near future.

Android Pie (9.0) Image Is Available. Come Get A Piece!

Continuing the series of Android images I have created, I’d like to announce that an Android Pie (9.0) image is now available for download.  Unfortunately, I had to retire the LG Nexus 5X (it topped out at Oreo), so this time I used a Google Pixel 3.  The image contains user-populated data within the stock Android apps and 24 non-stock apps.  It includes some new, privacy-centered messaging apps:  Wickr Me, Silent Phone, and Dust.

As with the Nougat and Oreo images, this one includes robust documentation; however, there are some differences in the files being made available.  First, there is no .ufd file.  Second, there is no takeout data.  It appeared, based on the traffic for the last two images, there was little interest, so I did not get takeout data this time.  If enough interest is expressed, I will get it.

Third…and this is a biggie…there are multiple files.  The first file, sda.bin (contained within sda.7z), is an image of the entire phone.   This file contains all of the partitions of the phone in an unencrypted format…except for the /data partition (i.e. sda21 or Partition 21), which is encrypted. I tried every method I could think of to get a completely unencrypted image, but was unable to do so.  I suspect the Titan M chip may have something to do with this but I need to study the phone and Android Pie further to confirm or disprove.  Regardless, I am including this file so the partition layout and the unencrypted areas can be studied and examined.  I will say there are some differences between Pie’s partition layout and the layout of previous flavors of Android.

The sda.bin file is 64 GB in size (11 GB compressed), so make sure you have enough room for the decompressed file.

The second file, Google Pixel 3.tar, is the unencrypted /data partition.  Combined with the sda.bin file, you have a complete picture of the phone.

And finally, there is a folder called “Messages,” which contains two Excel spreadsheets that have MMS and SMS messages from the Messages app.  There were way too many messages for me to type out in the documentation this time, so I just exported them to spreadsheets.  I can confirm that both spreadsheets are accurate.

This image is freely available to anyone who wants it for training, education, testing, or research.

Once Android Q gets further along in beta I will begin work on an image for it, so, for the time being, this will be it. 🙂

Please note the images and related materials are hosted by Digital Corpora.  You can find everything here.

Google Search Bar & Search Term History – Are You Finding Everything?

Search history.  It is an excellent way to peer into someone’s mind and see what they are thinking at a particular moment in time.  In a court room, search history can be used to show intent (mens rea).  There are plenty of examples where search history has been used in court to establish a defendant’s intent.  Probably the most gruesome was the New York City Cannibal Cop trial, where prosecutors used the accused’s search history against him.  Of course, there is a fine line between intent and protected speech under the First Amendment.

Over the past month and a half I have published a couple of blog posts dealing with Google Assistant and some of the artifacts it leaves behind, which you can find here and here.  While poking around I found additional artifacts present in the same area that have nothing to do with Google Assistant:  search terms.

In a way I wasn’t surprised, but then again I was; after all, the folder where this data was found has “search” in its name (com.google.android.googlequicksearchbox).  The surprising thing about these search terms is that they are unique to this particular area in Android; they do not appear anywhere else, so it is possible that you or I (or both) could have been missing pertinent artifacts in our examinations (I have missed something).  Conducting a search via this method can trigger Google Chrome to go to a particular location on the Internet, but the term used to conduct the search is missing from the usual spot in Chrome’s History.db file.

My background research on the Google Search Bar (as it is now known) found that this feature may not be used as much as, say, the search/URL bar inside Chrome.  In fact, there are numerous tutorials online that show a user how to remove the Google Search Bar from Android’s Home Screen, presumably to make more space for home screen icons.  I will say, however, that while creating two Android images (Nougat and Oreo), having that search bar there was handy, so I can’t figure out why people wouldn’t use it more.  But, I digress…

Before I get started there are a few things to note.  First, the data for this post comes from two different flavors of Android:  Nougat (7.1.2) and Oreo (8.1).  The images can be found here and here, respectively.  Second, the device used for each image was the same (LG Nexus 5X), and it was rooted both times using TWRP and Magisk.  Third, I will not provide a file structure breakdown here as I did in the Google Assistant blog posts.  This post will focus on the pertinent contents along with content markers within the binarypb files.  I found the binarypb files related to Google Search Bar activity to contain far more protobuf data than those from Google Assistant, so a file structure breakdown is impractical.

Finally, I thought it might be a good idea to give some historical context about this feature by taking a trip down memory lane.

A Quick Background

Back in 2009 Google introduced what, at the time, it called Quick Search Box for Android, which shipped with Android 1.6 (Donut).  It was designed as a place a user could go to type a word or phrase and search not only the local device but also the Internet.  Developers could adjust their apps to expose services and content to Quick Search Box so returned results would include their app.  The neat thing about this feature was that it was contextually/location aware, so, for example, I could type the word “weather” and it would display the weather conditions for my current location.  All of this could occur without the need for another app on the phone (depending on the search).

QSB-Doughnut

Google Quick Search Box – circa 2009.

Searching.png

Showtimes…which one do you want?

Prior to Google Assistant, Quick Search Box had a vocal input feature (the microphone icon) that could execute commands (e.g. call Mike’s mobile) and that was about it.  Compared to today this seems archaic, but, at the time, it was cutting edge.

VocalInput.png

Yes, I’m listening.

Fast forward three years to 2012’s Jelly Bean (4.1).  By that time Quick Search Box (QSB) had been replaced by Google Now, Google’s search and prediction service.  If we were doing Ancestry.com or 23andMe, Google Now would definitely be a genetic relative of Google Search Bar/Google Assistant.  The resemblance is uncanny.

android_41_jelly_bean_ss_08_verge_300.jpg

Mom, is that you?  Google Now in Jelly Bean

The following year, Kit Kat allowed a device to start listening for the hotword “Ok, Google.”  The next big iteration was Now on Tap in 2015’s Marshmallow (6.x), and, with the arrival of Oreo (8.x) we have what we now know today as Google Assistant and the Google Search Bar (GSB).   Recently in Android Pie (9.x) GSB moved from the top part of the home screen to the bottom.

old-navbar-1080x1920

Google Search Bar/Google Assistant at the bottom in Android Pie (9.x).

As of the Fall of 2018 Nougat and Oreo accounted for over half of the total Android install base.  Since I had access to images of both flavors and conducted research on both, the following discussion covers both.  There were a few differences between the two systems, which I will note, but, overall, there was no major divergence.

To understand where GSB lives and the data available, let’s review…

Review Time

GSB and Google Assistant are roommates in both Nougat and Oreo; they both reside in the /data/data directory in the folder com.google.android.googlequicksearchbox.  See Figure 1.

galisting

Figure 1.  GSB & Google Assistant’s home in Android.

This folder holds data about searches that are done from GSB along with vocal input generated by interacting with Google Assistant.  The folder has the usual suspect folders along with several others.  See Figure 2 for the folder listings.

galisting-infile

Figure 2.  Folder listing inside of the googlequicksearchbox folder.

The folder of interest here is app_session.  This folder has a great deal of data, but just looking at what is here one would not suspect anything.  The folder contains several binarypb files, which are binary protocol buffer files.  These files are Google’s home-grown rival to XML and JSON.  They contain data that is relevant to how a user interacts with their device via Google Assistant and GSB.  See Figure 3.

Figure 3.PNG

Figure 3.  binarypb file (Nougat).

A good deal of the overall structure of these binarypb files differ from those generated by Google Assistant.  I found the GSB binarypb files not easy to read compared to the Google Assistant files.  However, the concept is similar:  there are markers that allow an examiner to quickly locate and identify the pertinent data.

Down in the Weeds

To start, I chose 18551.binarypb in the Nougat image (7.1.2).  This search occurred on 11/30/2018 at 03:55 PM (EST).  The search was conducted while the phone was sitting on my desk in front of me, unlocked and displaying the home screen.  The term I typed into the GSB was “dfir.”  I was presented with a few choices, and then chose the option that took me to the “AboutDFIR” website via Google Chrome.  The beginning of the file appears in Figure 4.

Figure 4.PNG

Figure 4.  Oh hello!

While not a complete match, this structure is slightly similar to that of the Google Assistant binarypb files.  The big takeaway here is the “search” in the blue box.  This is what this file represents/where the request is coming from.  The BNDLs in the red boxes are familiar to those who have read the Google Assistant posts.  While BNDLs are scattered throughout these files, it is difficult to determine where the individual transactions occur within the binarypb files, thus I will ignore them for the remainder of the post.

Scrolling down a bit finds the first area of interest seen in Figure 5.

Figure 5.PNG

Figure 5.  This looks familiar.

In the Google Assistant files, there was an 8-byte string that appeared just before each vocal input.  Here there is a 4-byte string (0x40404004 – green box) that appears before the search term (purple box).  Also present is a timestamp in Unix Epoch Time format (red box).  The string 0x97C3676667010000 is read little-endian and converted to decimal.  Here, that value is 1543611335575 (milliseconds).

Figure 6.PNG

Figure 6.  The results of the decimal conversion.

This time is the time I conducted the search from GSB on the home screen.
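If you want to verify the decoding yourself, the round trip is short.  The bytes below are exactly the ones shown in Figure 5, treated as milliseconds since the Unix epoch; nothing else is assumed.

```python
import struct
from datetime import datetime, timezone

raw = bytes.fromhex("97C3676667010000")         # the 8 bytes as they sit in the file

millis = struct.unpack("<Q", raw)[0]            # "<Q" = little-endian unsigned 64-bit
print(millis)                                   # 1543611335575
print(datetime.fromtimestamp(millis / 1000, tz=timezone.utc))
# 2018-11-30 20:55:35.575000+00:00 -> 03:55:35 PM EST
```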

Down further is the area seen in Figure 7.  The bit in the orange box looks like the Java wrappers in the Google Assistant files.  The string webj and.gsa.widget.text* search dfir and.gsa.widget.text has my search term “dfir” wrapped in two instances of the string “and.gsa.widget.text.”  Based on Android naming schemas, I believe this to be “Android Google Search Assistant Widget” with text.  This is speculation on my part, as I haven’t been able to find anything that confirms or denies it.

Figure 7.PNG

Figure 7.  More search information.

The 4-byte string (green box), my search term (purple box), and the timestamp (red box) are all here.  Additionally, there is the string in the blue box.  This 5-byte string, 0xBAF1C8F803, is something also seen in Google Assistant files.  In those files, this string appeared just prior to the first vocal input in a binarypb file, regardless of when, chronologically, it occurred during the session (remember, the last thing chronologically in the session was the first thing in those binarypb files).  Here, this string occurs at the second appearance of the search term.

Traveling further, I find the area depicted in Figure 8.  This area of the file is very similar to that of the Google Assistant files.

Figure 8.PNG

Figure 8.  A familiar layout.

The 16-byte string ending in 0x12 in the blue box is one that was seen in the Google Assistant files.  In those files I postulated this string marked the end of a vocal transaction.  Here, it appears to be doing the same thing.  Just after that, a BNDL appears, then the 4-byte string in the green box, and finally my “dfir” search term (purple box).  Just below this area, in Figure 9, there is a string “android.search.extra.EVENT_ID” and what appears to be some type of identifier (orange box).  Just below that, is the same time stamp from before (red box).

Figure 9.PNG

Figure 9.  An identifier.

I am showing Figure 10 just to show a similarity between GSB and Google Assistant files.  In Google Assistant, there was a 16-byte string at the end of the file that looked like the one shown in Figure 8, but it ended in 0x18 instead of 0x12.  In GSB files, that string is not present.  Part of it is, but not all of it (see the red box).  What is present is the and.gsa.d.ssc. string (blue box), which was also present in Google Assistant files.

Figure 10.PNG

Figure 10.  The end (?).

The next file I chose was 33572.binarypb.  This search occurred on 12/04/2018 at 08:48 AM (EST).  The search was conducted while the phone was sitting on my desk in front of me, unlocked and displaying the home screen.  The term I typed in to the GSB was “nist cfreds.”  I was presented with a few choices, and then chose the option that took me to NIST’s CFReDS Project website via Google Chrome.  The beginning of the file appears in Figure 11.

Figure 11.PNG

Figure 11.  Looks the same.

This looks just about the same as Figure 4.  As before, the pertinent piece is the “search” in the blue box.  Traveling past a lot of protobuff data, I arrive at the area shown in Figure 12.

Figure 12.PNG

Figure 12.  The same, but not.

Other than the search term (purple box) and time stamp (red box) this looks just like Figure 5.  The time stamp converts to decimal 1543931294855 (Unix Epoch Time).  See Figure 13.

Figure 13.PNG

Figure 13.  Looks right.

As before, this was the time that I had conducted the search in GSB.

Figure 14 recycles what was seen in Figure 7.

Figure 14.PNG

Figure 14.  Same as Figure 7.

Figure 15 is a repeat of what was seen in Figures 8 and 9.

Figure 15.PNG

Figure 15.  Same as Figures 8 & 9.

While I am not showing it here, just know that the end of this file looks the same as the first (seen in Figure 10).

In both instances, after having received a set of results, I chose ones that I knew would trigger Google Chrome, so I thought there would be some traces of my activities there.  I started looking at the History.db file, which shows a great deal of Google Chrome activity.  If you aren’t familiar, you can find it in the data\com.android.chrome\app_chrome\Default folder.  I used ol’ trusty DB Browser for SQLite (version 3.10.1) to view the contents.

As it turns out, I was partially correct.

Figure 16 shows the table “keyword_search_terms” in the History.db file.

Figure 16.PNG

Figure 16.  Something(s) is missing.

This table shows search terms used in Google Chrome.  The term shown, “george hw bush,” is one that I conducted via Chrome on 12/01/2018 at 08:35 AM (EST).  The terms I typed into GSB to conduct my searches, “dfir” and “nist cfreds,” do not appear.  However, viewing the table “urls,” a table that shows the browsing history for my test Google account, you can see when I went to the AboutDFIR and CFReDS Project websites.  See Figures 17 and 18.

Figure 17

Figure 17.  My visit to About DFIR.

Figure 18.PNG

Figure 18.  My visit to NIST’s CFReDS.

The column “last_visit_time” stores the time of last visit to the site seen in the “url” column.  The times are stored in Google Chrome Time (aka WebKit time), which is a 64-bit value in microseconds since 01/01/1601 at 00:00 (UTC).  Figure 19 shows the time I visited AboutDFIR and Figure 20 shows the time I visited CFReDS.

Figure 19

Figure 19.  Time of my visit to About DFIR.

Figure 20

Figure 20.  Time of my visit to NIST’s CFReDS.
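For reference, the same lookups (and the WebKit time conversion) can be scripted instead of done by hand in DB Browser.  Below is a minimal sketch against a copied-out History.db; the urls and keyword_search_terms tables and the last_visit_time column are the standard Chrome schema described above.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

WEBKIT_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def webkit_to_utc(value: int) -> datetime:
    """Chrome/WebKit time: microseconds since 1601-01-01 00:00:00 UTC."""
    return WEBKIT_EPOCH + timedelta(microseconds=value)

con = sqlite3.connect("History.db")  # copied from data\com.android.chrome\app_chrome\Default

# Search terms Chrome itself recorded (the GSB terms will not show up here).
for (term,) in con.execute("SELECT term FROM keyword_search_terms"):
    print("search term:", term)

# Browsing history, newest first, with the timestamp converted.
rows = con.execute("SELECT url, title, last_visit_time FROM urls ORDER BY last_visit_time DESC")
for url, title, last_visit in rows:
    print(webkit_to_utc(last_visit), title, url)
```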

I finished searching the Chrome directory and did not find any traces of the search terms I was looking for, so I went back over to the GSB directory and looked there (other than the binarypb files).  Still nothing.  In fact, I did not find any trace of the search terms other than in the binarypb files.  As a last-ditch effort, I ran a raw keyword search across the entire Nougat image, and still did not find anything.

This could potentially be a problem.  Could it be that we are missing parts of the search history in Android?  The History.db file is a great and easy place to look and I am certain the vendors are parsing that file, but are the tool vendors looking at and parsing the binarypb files, too?

As I previously mentioned, I also had access to an Oreo image, so I loaded that one up and navigated to the com.google.android.googlequicksearchbox\app_session folder.  Figure 21 shows the file listing.

Figure 21.PNG

Figure 21.  File listing for Oreo.

The file I chose here was 26719.binarypb.  This search occurred on 02/02/2019 at 08:48 PM (EST).  The search was conducted while the phone was sitting in front of me, unlocked and displaying the home screen.  The term I typed in to the GSB was “apple macintosh classic.”  I was presented with a few choices but took no action beyond that.  Figure 22 shows the beginning of the file in which the “search” string can be seen in the blue box.

Figure 22.PNG

Figure 22.  Top of the new file.

Figure 23 shows an area just about identical to that seen in Nougat (Figures 5 and 12).  My search term can be seen in the purple box and a time stamp in the red box.  The time stamp converts to decimal 1549158503573 (Unix Epoch Time).  The results can be seen in Figure 24.

Figure 23.PNG

Figure 23.  An old friend.

Figure 24

Figure 24.  Time when I searched for “apple macintosh classic.”

Figure 23 does show a spot where Oreo differs from Nougat.  The 4-byte string in the green box that appears just before the search term, 0x50404004, is different.  In Nougat, the first byte is 0x40, and here it is 0x50.  A small change, but a change, nonetheless.

Figure 25 shows a few things that appeared in Nougat (Figures 7 & 14).

Figure 25

Figure 25.  The same as Figures 7 & 14.

As seen, the search term is in the purple box, the search term is wrapped in the orange box, the 4-byte string appears in the green box, and the 5-byte string seen in the Nougat and the Google Assistant files is present (blue box).

Figure 26 shows the same objects as those in the Nougat files (Figures 8, 9, & 15).  The 16-byte string ending in 0x12, the 4-byte string (green box), my search term (purple box), some type of identifier (orange box), and the time stamp (red box).

Figure 26.PNG

Figure 26.  Looks familiar…again.

While not depicted in this post, the end of the file looks identical to those seen in the Nougat files.

Just like before, I traveled to the History.db file to look at the “keyword_search_terms” table to see if I could find any artifacts left behind.  See Figure 27.

Figure 27.PNG

Figure 27.  Something is missing…again.

My search term, “apple macintosh classic,” is missing.  Again.  I looked back at the rest of the GSB directory and struck out.  Again.  I then ran a raw keyword search against the entire image.  Nothing.  Again.

Out of curiosity, I decided to try two popular forensic tools to see if they would find these search terms.  The first tool I tried was Cellebrite Physical Analyzer (Version 7.15.1.1).  I ran both images through PA, and the only search terms I saw (in the parsed data area of PA) were the ones present in Figures 16 & 27; these terms were pulled from the “keyword_search_terms” table in the History.db file.  I ran a search across both images (from the PA search bar) using the keywords “dfir,” “cfreds,” and “apple macintosh classic.”  The only returned hits were the ones from the “urls” table in the History.db file of the Nougat image; the search term in the Oreo image (“apple macintosh classic”) did not show up at all.

Next, I tried Internet Evidence Finder (Version 6.23.1.15677).  Its returned artifacts were the same ones Physical Analyzer found, and from the same location, but it did not find the search terms from GSB.

So, two tools that have a good footprint in the digital forensic community missed my search terms from GSB.  My intention here is not to speak ill of either Cellebrite or Magnet Forensics, but to show that our tools may not be getting everything that is available (the vendors can’t research everything).  It is repeated often in our discipline, but it bears repeating here:  always test your tools.

There is a silver lining here, though.  Just to check, I examined my Google Takeout data, and, as it turns out, these searches were present in what was provided by Google.

Conclusion

Search terms and search history are great evidence.  They provide insight in to a user’s mindset and can be compelling evidence in a court room, civil or criminal.  Google Search Bar provides users a quick and convenient way to conduct searches from their home screen without opening any apps.  These convenient searches can be spontaneous and, thus, dangerous; a user could conduct a search without much thought given to the consequences or how it may look to third parties.  The spontaneity can be very revealing.

Two major/popular forensic tools did not locate the search terms from Google Search Bar, so it is possible examiners are missing search terms/history.  I will be the first to admit, now that I know this, that I have probably missed a search term or two.  If you think a user conducted a search and you’re not seeing the search term(s) in the usual spot, try the area discussed in this post.

And remember:  Always.  Test.  Your.  Tools.

Update

A few days after this blog post was published, I had a chance to test Cellebrite Physical Analyzer, version 7.16.0.93.  This version does parse the .binarypb files, although you will get multiple entries for the same search, and some entries may have different timestamps.  So, caveat emptor; it will be up to you/the investigator/both of you to determine which is accurate.

I also have had some time to discuss this subject further with Phil Moore (This Week in 4n6), who has done a bit of work with protobuf files (Spotify and the KnowledgeC database).  The thought was to use Google’s protoc.exe (found here) to decode the .binarypb files and then try to interpret the respective fields.  Theoretically, this would make it slightly easier than manually cruising through the hexadecimal and decoding the time by hand.  To test this, I ran the file 26719.binarypb through protoc.exe.  You can see the results for yourself in Figures 28, 29, and 30, with particular attention being paid to Figure 29.

Figure 28

Figure 28. Beginning of protoc output.


Figure 29

Figure 29.  Middle part of the protoc output (spaces added for readability).


Figure 30

Figure 30.  Footer of the protoc output.

In Figure 28 the “search” string is identified nicely, so a user could easily see that this represents a search, but you can also see there is a bunch of nonsensical data grouped in octets.  These octets represent the data in the .binarypb file, but how they line up with the hexadecimal/ASCII values is anyone’s guess.  It is my understanding that there is a bit of educated guessing involved when attempting to decode this type of data.  Since protobuf data is serialized and the programmers have carte blanche in determining what key/value pairs exist, the octets could represent anything.

That being said, the lone educated guess I have is that the octet 377 represents 0xFF.  I counted the number of 377’s backwards from the end of the octal time (described below) and found that they matched (24 – there were 24 0xFF’s that preceded the timestamp seen in Figure 23).  Again, speculation on my part.

Figure 29 is the middle of the output (I added spaces for readability).  The area in the red box, as discovered by Phil, is believed to be the timestamp, but in an octal (base-8) format…sneaky, Google.  The question mark at the end of the string lines up with the question mark seen at the end of each timestamp seen in the figures of this article.  The area in the green box shows the first half of the Java wrapper that was discussed and seen in Figure 25.  The orange box contains the search string and the last half of the Java wrapper.

Figure 30 shows the end of the protoc output with the and.gsa.d.ssc.16 string.
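If you want to reproduce this, the command is simply protoc --decode_raw < 26719.binarypb (no .proto definition is needed).  Getting the timestamp back out of the octal-escaped bytes in that output can also be scripted.  A sketch is below; the escape string is reconstructed from the decimal value discussed above purely to show the mechanics, it is not copied from Figure 29.

```python
import struct
from datetime import datetime, timezone

# Hypothetical decode_raw string field, reconstructed from 1549158503573 for illustration.
escaped = r"\225\324\n\261h\001\000\000"

# Interpret the C-style octal escapes as raw bytes, then read them little-endian.
raw = escaped.encode().decode("unicode_escape").encode("latin-1")
millis = struct.unpack("<Q", raw)[0]

print(millis)                                                  # 1549158503573
print(datetime.fromtimestamp(millis / 1000, tz=timezone.utc))  # 2019-02-03 01:48:23 UTC
```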

So, while there is not an open-source method of parsing this data as of this writing, Cellebrite, as previously mentioned, has baked this into the latest version of Physical Analyzer, but care should be taken to determine which timestamp(s) is accurate.