Venmo. The App for Virtual Ballers.

I recently went on a trip which required hanging out in a couple of airport terminals. While waiting on my flights I saw the usual scene: a sea of people staring down at their phones. I am not going to delve into the obvious security concerns (whole different topic), but I was able to see many home screens from my vantage point, and I noticed a few consistent apps. One was a blue square/circle with a white ‘V’ in it: Venmo, the social media payment app.

Full disclosure: I am not a Venmo person. I live in Apple’s walled garden and like to use Apple Pay whenever I can. I am reluctant to provide any merchant or service any financial information as I have been caught up in a couple of data breaches, specifically, Target and Home Depot. While those two incidents involved POS weaknesses, the pain they caused made me extremely paranoid when it comes to my financial information.

Not everyone has that attitude, however. While conducting testing for this post I was amazed at how many people in my contact list use Venmo. My spouse even uses it (she had to show me the ropes when I first fired up the app). As of April 2019, Venmo reported 40 million active users with payment volume around $21 billion; payment volume is expected to hit $100 billion for the entirety of 2019. Comparatively, it sits behind its parent company, PayPal, and Amazon Pay. Third isn’t too shabby in the mobile payment space.

Because Venmo is owned by PayPal, users can use Venmo at many merchants where PayPal is accepted, but merchant payments aren’t Venmo’s main draw. Its main draw is being able to pay people for whatever reason without having to use cash…especially since individual people don’t accept cards and a majority, I would argue, don’t carry cash around. Venmo is also a social media app. It has a feed, similar to other social media feeds, that shows you what your “friends” are buying and selling. For example, in my Venmo social feed I saw payments for “schnacks,” daycare, date nights, booze, haircuts, groceries, picking up prescription medications, pizza, and a cornhole tournament. These were person-to-person transactions, not person-to-merchant. Paying people for whatever just by clicking a few buttons can be a lot easier than paying with cash (again, who does that nowadays???) or trying to figure out who accepts cards and who doesn’t. For some small merchants, it may also be cheaper to use Venmo than to accept cards and pay bank/card transaction fees.

There is also a public feed that shows what people (who you are not friends with) are buying and selling. This is notable as the default setting for transactions is “public” (more on that later).

This idea of a social feed is important. File it away for now because I will bring it up later when discussing iOS.

Like everything else, people have found a way to use Venmo for questionable activities. I found a site, Vicemo, that uses Venmo’s public API to track what it thinks are payments for booze, strippers, and narcotics. In 2017 there was a study that found a third of Venmo users have paid for weed, cocaine, and Adderall with Venmo, and that 21 percent of Venmo users have used it to make bets on sporting events.

Is This Useful?

Generally speaking, having access to this data can be beneficial as it can be considered pattern-of-life (POL) data. A lot can be gleaned from knowing what a person purchases or sells, and from/to whom they are purchasing or selling. Just think about the entry in my social feed regarding the prescription medication. Based on that entry, I may be able to deduce that person has a particular medical condition if the payment note has enough information in it. I could also look in my public feed and find similar bits of information.

This data can be a gold mine for investigators, and I would argue an investigator’s imagination is the limit for its application. For example, think of the drug dealer who is using Venmo for payments. Want to know who the buyers are or who the source is? Just pull the Venmo data, do some analysis, and start slinging subpoenas/court orders/search warrants for Venmo account information. What about the local fence who operates in your jurisdiction? Want to know who the fence sold some stolen property to? Same scenario.

My point is people do not randomly give money away. They are paying for things they need, things they want, or things that are significant to them in some fashion. Understanding this allows an investigator to draw conclusions by taking this data and applying some deductive reasoning.

POL data is extremely useful. There has been a lot of research during the past couple of years that has shown just how much data about our lives is collected by our mobile devices. This data, when interpreted correctly, can be powerful.

The Setup

As usual, just a few notes on the hardware/software used.

Pixel 3:   Android 10 (patch level 9/5/19), rooted, Venmo version 7.40.0 (1539)

iPhone SE:  iOS 13.2, non-jailbroken, Venmo version 7.40.1 (1)

For testing I used a couple of pre-paid cards that I picked up at a Wal-Mart. I had to register the cards first with their respective “banks”, and then load them into Venmo; if you were to use a card issued to you the registration part would not be necessary. Also, you have the option to link an actual bank account to your Venmo account, which is something I did NOT do.

I will add that during my analysis, I did not find where either OS stored the actual card information, even though I could see it within the UI. It may be there, encoded somehow, but I didn’t see it.

Android

Before we go down the forensic road, I want to show what a transaction screen looks like in the Android UI. It will help make things clearer.

Figure 1 shows the payment screen. In the red box is the transaction recipient. While I didn’t do it during testing, you can add more than one recipient here. The green box is obvious…the amount of money I am sending. The blue box shows the note that is attached to the payment. Venmo requires that there be something in this field, even if it is an emoji. The orange box is the share level of the transaction. It can be set to Public (everyone on Venmo sees it), Friends Only, or Private (just you and the recipient see the transaction). The default is Public, but you can change the default setting from the Settings menu.

Figure 1.png
Figure 1.  Android payment UI.

The purple box dictates what happens to this transaction. If you hit “Pay” the money is sent. If you hit “Request” a request goes to the transaction recipient who can either pay or decline to pay the requested amount. More on those in a bit.

There is one other way to send someone money. An intended transaction recipient can display a QR code on their device, which the sender can scan, fill out the note and amount fields, and then hit “Pay” or “Request.”

Venmo data in Android resides in /data/data/com.venmo. Because of new technical hurdles with Android 10 on the Pixel 3, I had to copy the folder over to the /sdcard/Download folder via shell and then run an ADB pull from there.
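If it helps, a minimal sketch of that workaround is below. It assumes adb is on the PATH, the device is rooted, and `su -c` works for your particular root method (the exact su syntax varies).

```python
# Sketch: copy the Venmo app sandbox to a location adb can read, then pull it.
# Assumes adb is on the PATH, the device is rooted, and `su -c` works for the root method in use.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The app sandbox itself is not readable by `adb pull`, so copy it out as root first.
run(["adb", "shell", "su -c 'cp -r /data/data/com.venmo /sdcard/Download/'"])

# Pull the copy to the examination machine.
run(["adb", "pull", "/sdcard/Download/com.venmo", "./com.venmo_extract"])
```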

Unlike my previous post about Wickr, the Venmo databases are pretty straightforward. There are three files of interest, with the biggie being venmo.sqlite. The database can be found in the ./databases folder. See Figure 2.

Figure 2.png
Figure 2.  Available databases in Android.

There is an additional database of interest here, too, named mparticle.db, which I will discuss shortly.

The venmo.sqlite database contains all of the transaction information, along with any comments about transactions and any responses to those comments. Take a look at the screenshot in Figure 3, which shows a completed transaction. The iOS device paid the Android device $5.00 on 10/31/2019 at 9:56 PM (EDT). The note sent with the payment was “For the testing stuff. My first payment.” The Android device responded with a comment (no timestamp is seen).

 

Figure 3.png
Figure 3. Android screenshot of a completed payment.

Figure 4 shows the table marvin_stories. You will notice transactions are called stories, and that timestamps are stored in Zulu time in human-readable format.

Figure 4.png
Figure 4.  The marvin_stories table.

The columns of interest here are story_data_created, story_note, and story_audience. However, if you want to get all of that information and more in one place, you need to look at the contents of the last column, story_blob. See Figure 5.

Figure 5
Figure 5.  Transaction information from the story_blob column.

The contents of these cells are broken up into three parts: the actor (transaction sender), the transaction information, and the target (transaction recipient). If you recall from Figure 3, the iOS device sent the Android device $5.00. So the actor, the iOS device, is seen in the green box. There you can see the action (“pay” the Android device), the display name, friend status, the user ID for that account, last name, how many mutual friends we may have, and the actual user name (Josh-Hickman-19).

In the purple box you find the transaction information: the amount of money (5.0 – Venmo drops trailing zeros occasionally), the date the transaction was created and completed, the transaction ID (more on that in a moment), the note that came with the payment, and the fact that it was “settled.” The status of “settled” is the only status I have seen, and I have not been able to change the value in this field.

The orange box has the transaction recipient, or the target, which is the Android device (ThisIs-DFIR). It has the same information as the green box.
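If you would rather not click through each cell, the story_blob contents can be dumped programmatically. Below is a minimal sketch that assumes you are working against a copy of venmo.sqlite and that story_blob holds UTF-8 JSON text; since the exact key names inside the JSON are not documented here, it simply pretty-prints whatever is there.

```python
# Sketch: dump each story_blob from marvin_stories as pretty-printed JSON.
# Assumes a local copy of venmo.sqlite and that story_blob contains UTF-8 JSON text.
import json
import sqlite3

conn = sqlite3.connect("venmo.sqlite")
for story_id, blob in conn.execute("SELECT story_id, story_blob FROM marvin_stories"):
    if not blob:
        continue
    if isinstance(blob, bytes):
        blob = blob.decode("utf-8", errors="replace")
    try:
        parsed = json.loads(blob)
    except json.JSONDecodeError:
        continue  # skip rows that do not hold valid JSON
    print(f"--- story {story_id} ---")
    print(json.dumps(parsed, indent=2))
conn.close()
```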

When a transaction is completed in Venmo, both parties get an email notification. The transaction ID seen in the purple box in Figure 5 also appears in the email (Figure 6, red box).

Figure 6
Figure 6.  Email with transaction ID.

The column story_likes_blob is simple, and is seen in the red box in Figure 7. If there is a value present in this cell it means someone “liked” the transaction. An example of how that appears in the UI is seen in Figure 8.

Figure 7.png
Figure 7.  story_likes_blob column.

 

Figure 8.png
Figure 8.  A “liked” transaction.

Values in the column are also stored in JSON. The “like” for the transaction in Figure 8 is displayed in Figure 9. As you can see, much of the information in the transaction JSON data is also present here.

Figure 9.png
Figure 9.  A “like” in the database.

A note about the privacy level of transactions. After a transaction is completed, it goes into a social feed. The feed it goes into is dependent upon what privacy level setting was chosen at the time the transaction was sent; however, the transaction privacy level can be changed by any party that was part of the transaction at any time. If the privacy level is changed, the transaction is removed from the original feed and placed in the new feed.

The reason privacy level is important is that people not part of the transaction may be able to comment on it (because they can see it). Their ability to do so is dependent upon the privacy level. For example, if I set the privacy level of a transaction to “Private,” only the other person (or persons if there is more than one party) can comment on the transaction. If I set the privacy level to “Friends Only,” then my “friends” in Venmo can comment on the transaction along with the parties involved. If I set the privacy level to “Public,” anyone on Venmo can comment on the transaction.

Figure 10 shows a transaction that has a comment. As you can see in the blue box, I set the privacy level of this particular transaction to “Friends Only,” which means any of my Venmo “friends” could add a comment to this transaction if they chose to do so.

Figure 10.png
Figure 10.  A transaction with a comment.

Getting back to the database, the table comments contains information about comments made on transactions. See Figure 11.

Figure 11
Figure 11.  The comments table.

Four columns are important here. The first is obvious: the comment_message column (orange box). This contains all of the comments made. The column next to it, created_time, is the time the comment was left (blue box). The column in the red box, comment_story_owner, is important as it links the comment back to the transaction. The values in this column correspond to the values in the story_id column in the marvin_stories table. Note that transactions will not have entries in this table if no one comments on them.

The column seen in the green box, comment_user_blob, contains much of the same JSON data seen in previous figures. This documents who made the comment. If you are in a hurry and don’t want to dig through the JSON, you can clearly see the username value of the user who made the comment in the cell. 🙂
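If you want the comments lined up with the transactions they belong to, a quick join over the columns described above does the trick. This is a minimal sketch assuming a local copy of venmo.sqlite.

```python
# Sketch: pair each comment with the note of the transaction it was left on.
# Assumes a local copy of venmo.sqlite and the column names described above.
import sqlite3

conn = sqlite3.connect("venmo.sqlite")
query = """
    SELECT s.story_id, s.story_note, c.comment_message, c.created_time
    FROM comments AS c
    JOIN marvin_stories AS s ON c.comment_story_owner = s.story_id
    ORDER BY c.created_time
"""
for story_id, note, comment, created in conn.execute(query):
    print(f"[{created}] story {story_id} ({note!r}): {comment}")
conn.close()
```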

The last table in this database is table_person, which contains data about anyone with whom you have conducted a transaction. The data is simple. See Figure 12.

Figure 12
Figure 12.  table_person.

The next file of interest is also a database: mparticle.db. It resides in the same location as venmo.sqlite. The only data I was able to generate in this database appears in the attributes table. See Figure 13.

Figure 13
Figure 13.  Attributes from the mparticle.db.

The columns attribute_key, attribute_value, and created_time (stored in Unix Epoch) are straightforward. One note about the time, though. Users are able to change their display names and phone numbers, and if they do, the created_time values will also change. I changed my first and last name, but left the phone number. The created_time for the phone number in Figure 13 corresponds to when I created the account, but the other values correspond to the time I last made changes to my first and last name.
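For quick review, the created_time values can be converted to UTC with a few lines of Python. This is a minimal sketch that assumes the values are Unix epoch milliseconds and falls back to seconds for smaller values; verify against a known event (such as account creation) before relying on it.

```python
# Sketch: convert mparticle.db created_time values to UTC.
# Assumes Unix epoch in milliseconds; falls back to seconds for smaller values.
import sqlite3
from datetime import datetime, timezone

def to_utc(value):
    seconds = value / 1000 if value > 10_000_000_000 else value
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

conn = sqlite3.connect("mparticle.db")
for key, val, created in conn.execute(
    "SELECT attribute_key, attribute_value, created_time FROM attributes"
):
    print(f"{key} = {val} (set {to_utc(created).isoformat()})")
conn.close()
```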

The last file is venmo.xml, and it resides in the ./shared_prefs folder. This file contains some basic information along with an additional nugget or two. Figure 14 shows the first part of the file.

Figure 14.png
Figure 14.  venmo.xml, Part 1.

In the green box is the userID for my “top friend.” Seeing as I had only one friend for this particular account, the value relates back to the account on the iOS device. The blue box has the timestamp for the last time I synced my contacts, the yellow box has the email account associated with the Android account, and the red box contains the userID and last name associated with the Android account.

Figure 15 has three items, the full name (display name) and phone number associated with the account (red box), and the display first name (blue box). The names seen here correspond to the values seen in the attributes table in the mparticle.db.

Figure 15.png
Figure 15.  venmo.xml, Part 2.

Figure 16 has two items. The first is seen in the green box, and is the user name on the Android account. The other, the user_balance_string value, corresponds to the amount of money this account has in Venmo, which can be seen in the blue box in Figure 17.

Figure 16.png
Figure 16.  venmo.xml, Part 3.
Figure 17.png
Figure 17.  Venmo balance.

When a transaction is completed in Venmo, the money goes into what I like to think of as a staging area. This staging area holds the funds, and you can either use them in other transactions, or transfer them to a bank account if you have one linked. You can also have the balance sent to a credit card, but the credit card has to be a certain type, and my pre-paid card didn’t meet the criteria.

Hitting “Pay” or scanning a QR code is not the only way money changes hands. A user can also request payment. Remember the “Request” button in Figure 1? Well, there is a way to tell if that was the method by which funds were transferred. See Figure 18.

Figure 18.png
Figure 18.  The “charge” action.

Figure 18 has the same data in the same green, purple, and orange boxes as seen in Figure 5. The only difference here is the “action” field at the top of the green box has changed from “pay” to “charge” (red arrow). This indicates the transaction occurred because the Actor (the transaction sender in the green box) requested payment, and the Target (the transaction recipient in the orange box) granted the request. As a side note, you can see that there are more digits in the “amount” field this time (purple box).

I tried requesting payment and declining the request three times to see what was left behind. Interestingly enough, there is no record of those three requests anywhere. Venmo may keep this data server side, but you will not get it from the device.

That’s it for Android. I had to triple-check everything because I thought I was missing something, given the amount of work involved in the Wickr post. That being said, there are other capabilities of Venmo I did not test, such as Facebook, Snapchat, and Twitter integration, along with linking my bank account. As I previously mentioned, I was surprised I did not find any references to the bank card used for transactions; it may be there, but I did not find it.

iOS

Good news: Venmo data is available in an encrypted iOS backup in iOS 13.2, so I assume the same is true for previous iOS versions. This is great because it means all an examiner needs to do is create a backup, or use their preferred forensic extraction tool, to get this data. Nothing else is necessary.

iOS contains the same data as the Android version, just in a different format. There were also a few additional bits of data I found that could be useful in certain situations, and I will review those as well.

Transaction information is stored in the net.kortina.labs.Venmo/Documents folder. Remember when I mentioned the idea of a social feed being important? Well, iOS is why. Unlike Android, iOS stores Venmo transactions in three different files. Which file it stores a transaction in depends on the privacy level set at the time of the transaction. See Figure 19.

Figure 19.png
Figure 19.  Contents of the net.kortina.labs.Venmo/Documents folder.

The files FriendsFeed, PrivateFeed, and PublicFeed should look familiar. These are the three privacy level settings for Venmo transactions. These three files are binary plist files. Figure 20 shows the same transaction as seen in Figure 5, and comes from the PrivateFeed file.

Figure 20.png
Figure 20.  Same transaction as Figure 5.
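Python’s plistlib will open these binary plists directly, so a quick look at their structure is easy. Below is a minimal sketch assuming the PrivateFeed file has been copied out of the Documents folder; if the feeds turn out to be NSKeyedArchiver archives rather than plain plists, expect the $archiver/$objects layout and an extra unarchiving step.

```python
# Sketch: load one of the feed bplists and show its top-level structure.
# Assumes PrivateFeed was copied out of net.kortina.labs.Venmo/Documents.
import plistlib
from pprint import pprint

with open("PrivateFeed", "rb") as fh:
    feed = plistlib.load(fh)  # handles both binary and XML plists

# The exact layout is not documented here, so just show what we got.
print(type(feed).__name__)
pprint(feed)
```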

Reading this is a little tricky as the data is not as neatly grouped as it is in the Android database. First, the file stores data from the bottom up, chronologically speaking. Second, this transaction was found in the PrivateFeed file (remember the privacy level setting seen in the orange box in Figure 1). To help with the comparison to Android, I grouped the actor, transaction information, and target data using the same green, purple, and orange boxes. In the red box is the comment the Android account left for this transaction, complete with the timestamp; see Figure 21 for how this looks in the iOS UI.

Figure 21.PNG
Figure 21.  Comment from Figure 20.

One thing you will notice is that the “action” and status fields are missing. I looked through this file, and the only time these fields appeared was in the most recent transaction that occurred with the privacy level set to Private (at the top of the PrivateFeed file). The rest of the transactions in this feed did not have an action or status associated with them. I may not necessarily be reading this bplist file correctly, so if someone else knows, please let me know so I can update this post.

Figure 22 was the most recent “Private” transaction on the iOS account. Figure 23 shows the data in the bplist file with the usual groupings (green box = actor, purple box = transaction information, orange box = target), and Figure 24 shows that same transaction in the venmo.sqlite database from the Android device. Notice the “action” and “status” fields are present in Figure 23.

Figure 22.png
Figure 22.  Last “private” transaction on the iOS account.
Figure 23.png
Figure 23.  bplist entry for the transaction in Figure 22.
Figure 24.png
Figure 24.  Android entry for the comparison.

Figure 25 shows the “Request” transaction from Figure 18. Again, I used the same colored boxes to group the data. Note the red box at the bottom; it contains the privacy level setting for the transaction.

Figure 25.png
Figure 25.  The “request” transaction from the iOS point of view.

Since we are on transactions I will head over to the “FriendsFeed” for just a moment. This is an additional bit of information that I saw in iOS that was not present in Android. This feed contains a lot of information about your Venmo friends’ transactions. The data is not as verbose as the data about my transactions, but you can see when transactions occur, the transaction information (time, note, and transaction ID – purple box), the actor (green box) and target (orange box). Notice the action and status fields are missing. Because the target and actor are not aware of this blog post, I have redacted pertinent information. See Figure 26.

Figure 26
Figure 26.  A transaction by one of my Venmo friends.

If you recall from Figure 19, there is a database present named Model.sqlite. This database contains information about a user’s Venmo friends. The table ZMANAGEDUSER contains friend information. See Figure 27.

Figure 27.png
Figure 27.  Venmo friends.

These columns are fairly self-explanatory. ZCREATEDAT is the time the user account was created. ZDISPLAYNAME is the display name for the account. ZFAMILYNAME is the user’s last name. ZGIVENNAME is the user’s first name. ZIDENTIFER is the user’s userID, and ZUSERNAME is the user’s user name.
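For a quick listing with readable timestamps, something like the sketch below works. It assumes ZCREATEDAT is stored as Core Data absolute time (seconds since 2001-01-01 UTC), which is common for Core Data stores but is an assumption here; if the values turn out to be Unix epoch, drop the offset.

```python
# Sketch: list Venmo friends from Model.sqlite with a readable created-at time.
# Assumes ZCREATEDAT is Core Data absolute time (seconds since 2001-01-01 UTC).
import sqlite3

COCOA_EPOCH_OFFSET = 978307200  # seconds between 1970-01-01 and 2001-01-01

conn = sqlite3.connect("Model.sqlite")
query = """
    SELECT ZUSERNAME, ZDISPLAYNAME,
           datetime(ZCREATEDAT + ?, 'unixepoch') AS created_utc
    FROM ZMANAGEDUSER
"""
for username, display, created in conn.execute(query, (COCOA_EPOCH_OFFSET,)):
    print(f"{username} ({display}) - account created {created} UTC")
conn.close()
```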

The PublicFeed file contains the same data as the Private and Friends versions, but includes data about people with whom the user is not friends.

The next interesting file is net.kortina.labs.Venmo.plist which is found in the ./Library/Preferences folder. The top part of the file is seen in Figure 28.

Figure 28.png
Figure 28.  net.kortina.labs.Venmo.plist, Part 1.

The data in the red square is the iOS equivalent to the Android data seen in Figure 13. The blue square references Venmo friends. I expanded it in Figure 29.

Figure 29.png
Figure 29. plist friends from net.kortina.labs.Venmo.plist.

The data in Figure 29 comes from the user’s contact list on device.

Figure 30 is the rest of the net.kortina.labs.Venmo.plist. The data in the red squares are the display name, the email address associated with the account, and the phone number associated with the account. You can see, based on the data in the blue square, that I changed my default privacy level to “Friends.”

Figure 30.png
Figure 30. net.kortina.labs.Venmo.plist, Part 2.

Figure 31 shows the file com.google.gmp.measurement.plist found in the same directory as the previous file. The app version and the first time the app was opened can be found in the red and blue squares, respectively.

Figure 31.png
Figure 31. com.google.gmp.measurement.plist.

That’s it for iOS. The data is readily available, and includes data that is not necessarily present in Android. As with Android, I did not find any information on the card attached to the account, and I did not test any of the linking or integration abilities.

Conclusion

Virtual payments are a thing, no doubt about it. Amazon Pay, PayPal, Venmo, Zelle, and many others, along with platform-specific services like Apple Pay and Google Pay, are all trying to help people pay for things. Venmo is ranked 3rd in that group, and stands out from the crowd by mixing payment services with social media capabilities.

Venmo contains POL data, which is both valuable and versatile. It can be used to extrapolate information about the user, which can be useful for investigators.

Wickr. Alright. We’ll Call It A Draw.

WickrPrompt
Ugh.  Not again.

Portions of this blog post appeared in the 6th issue of the INTERPOL Digital 4n6 Pulse newsletter. 

I would like to thank Heather Mahalik and Or Begam, both of Cellebrite, who helped make the Android database portion of this blog post possible, and Mike Williamson of Magnet Forensics for all the help with the underpinnings of the iOS version.

I have been seeing the above pop-up window lately.  A lot.  Not to pick on any particular tool vendor, but seeing this window (or one similar to it) brings a small bit of misery.  Its existence means there is a high probability there is data on a particular mobile device that I am not going to get access to, and this is usually after I have spent a considerable amount of time trying to gain access to the device itself.  Frustrating.

One of my mentors from my time investigating homicides told me early in my career that I was not doing it right unless I had some unsolved homicides on the books; he felt it was some sort of badge of honor that showed a dedication to the craft.  I think there should be a similar mantra for digital forensic examiners.  If you conduct digital forensic examinations for any substantial amount of time you are going to have examinations where there is inaccessible data, and nothing you do is going to change that.  You can throw every tool known to civilization at it, try to manually examine it, phone a friend, look on Twitter, search the blog-o-sphere, search the Discord channel, query a listserv, and conduct your own research, and still strike out.  This is a reality in our discipline.

Not being able to access such data is no judgment of your abilities; it just means you may not win this round.  Keep in mind there is a larger fight, and how you react to this setback is a reflection of your dedication to our craft.  Do you let inaccessible data defeat you, give up, and quit, or do you carry on with that examination, get what you can, and apply that same tenacity to future examinations?

One needs the latter mindset when it comes to Wickr.  For those who are unfamiliar, Wickr is a company that makes a privacy-focused, ephemeral messaging application. Initially available as an iOS-only app, Wickr expanded to include Android, Windows, macOS, and Linux, and branched out from personal messaging (Wickr Me) to small teams and businesses (Wickr Pro – similar to Slack) and an enterprise platform (Wickr Enterprise).  Wickr first hit the app market in 2012 and has been quietly hanging around since then.  Personally, I am surprised it is not as popular as Signal, but I think not having Edward Snowden’s endorsement and initially being secretive about its protocol may have hurt Wickr’s uptake a bit.

Regardless, this app can bring the pain to examinations.

Before I get started, a few things to note.  First, this post encompasses Android, iOS, macOS, and Windows.  Because of some time constraints I did not have time to test this in Linux.  Second, the devices and respective operating system versions/hardware are as follows:

Android 9.0:  Pixel 3, Wickr version 5.22

iOS 12.4:  iPhone XS and iPad Pro 10.5, Wickr version 5.22

macOS 10.14.6:  Mac Mini (2018), Wickr version 5.28

Windows 10 Pro 1903:  Surface Pro, Wickr version 5.28

Third, Wickr Me contains the same encryption scheme and basic functionality as the Pro and Enterprise versions: encrypted messaging, encrypted file transfer, burn-on-read messages, audio calling, and secure “shredding” of data. The user interface of Wickr Me is similar to the other versions, so the following sections will discuss the functionality of Wickr while using the personal version.

Finally, how do you get the data?  Logical extractions, at a minimum, should grab desktop platform Wickr data during an extraction.  For Android devices, the data resides in the /data/data area of the file system, so if your tool can get to this area, you should be able to get Wickr data.  For iOS devices, you will need a jailbroken phone or an extraction tool such as one that is metal, gray, and can unlock a door to get the Wickr database.  I can confirm that neither a backup nor a logical extraction contains the iOS Wickr database.

Visual Walkaround

Wickr is available on Android, iOS, macOS, and Windows, and while these platforms are different, the Wickr user interface (UI) is relatively the same across them. Figure 1 shows the Windows UI, Figure 2 shows the macOS UI, Figure 3 shows the iPhone, and Figure 4 shows the iPad. The security posture of the Wickr app on Android prevents screenshots from being taken on the device, so no UI figure is available.  Just know that it looks very similar to Figure 3.

Figure 1
Figure 1.  Wickr in Windows.
Figure 2.png
Figure 2.  Wickr in macOS.

 

Figure 3.png
Figure 3.  Wickr on iPhone.
Figure 4
Figure 4.  Wickr on iPad.

Each figure has certain features highlighted. In each figure the red box shows the icons for setting the expiration timer and burn-on-read (a setting that allows the sender of a message to set a self-destruct timer on a message before it is sent; the recipient has no control over this feature), the blue arrow shows the area where a user composes a message, the orange arrow shows the area where conversations are listed, and the purple arrow shows the contents of the highlighted conversation (chosen in the conversations list).  Not highlighted is the phone icon seen in the upper right corner of each figure, which initiates an audio call with the conversation participant(s).

The plus sign seen in the screen (red boxes) reveals a menu that has additional options: send a file (including media files), share a user’s location, or use one of the installed quick responses. Visually, the menu will look slightly different per platform, but the functionality is the same.  See Figure 5.

Figure 5
Figure 5.  Additional activity options (Windows UI).

The sending and receiving of messages and files works like it does in other messaging applications with similar capabilities. Figure 6 shows an active conversation within the macOS UI.

Figure 6.png
Figure 6.  A conversation with a text message and picture attachments (macOS UI).

Wickr is similar to Snapchat in that messages “expire” after a set period of time. The default time a message is active is six (6) days, which is the maximum amount of time a message can be available, but a user can set message retention times as short as one second. This setting is device specific; if a user has multiple devices they can choose different retention periods for each device.

Users can also set “burn-on-read” times, in which a message will expire (“burn”) a set period of time after it has been read. This setting is controlled by the message sender, regardless of the recipient’s message retention period setting.  The retention period for burn-on-read messages can also be set anywhere between 1 second and 6 days. Figure 7 shows the Windows Wickr UI when a burn-on-read message has been received and opened (bottom of the active conversation window pane), and Figure 8 shows the same UI after the burn-on-read retention period expired.

Figure 7
Figure 7.  A burn-on-read message (timer in red).
Figure 8
Figure 8.  Poof! The message has been burned.

The Secure Shredder function is Wickr’s feature by which data deleted by the app is rendered unrecoverable by overwriting it.  Secure Shredder runs automatically in the background; users on the Wickr Pro Silver or Gold tiers also get a configuration option that allows them to initiate it manually.  Testing showed this feature automatically runs roughly every one (1) minute while the device is idle.

Encryption.  All Of The Encryptions.

Wickr is designed with total privacy in mind, so all three versions use the same encryption model. The app not only protects messages, media, and files in transit, but it also protects data at rest. The app has been designed with perfect forward secrecy; if a user’s device is compromised, historical communications are still protected unless the attacker has the user’s password and the messages have not expired.

When a new message is received, it arrives in a “locked” state.  See Figure 9.

Figure 9.png
Figure 9.  A new message.

When a message is sent, the sender’s device will encrypt the message using a symmetric key. To generate the symmetric key, internal APIs gather random numbers which are then run through the AES-256 cryptographic algorithm in Galois/Counter Mode (GCM). Each message is encrypted using a new symmetric key, and this operation occurs strictly on the sender’s device. This encryption happens regardless of whether the message contains text, a file, or a combination of the two. The cipher text and the symmetric key (i.e. the package) are encrypted using the signed public key of the recipient’s device (I’ll discuss asymmetric operations in a minute), and then sent to the recipient who then decrypts the package using their private key. The symmetric key is then applied to the cipher text in order to decrypt it.
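To make the per-message symmetric step concrete, here is a minimal sketch of that general pattern (a fresh AES-256 key for each message, used in GCM mode) with the third-party cryptography package. It illustrates the concept described above, not Wickr’s actual implementation.

```python
# Sketch: one fresh AES-256-GCM key per message, as the design describes.
# Illustration of the general pattern only; not Wickr's actual code.
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(plaintext: bytes):
    key = AESGCM.generate_key(bit_length=256)  # new symmetric key for this message only
    nonce = os.urandom(12)                     # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # In the real scheme, the key and ciphertext would then be wrapped with the
    # recipient device's public key before leaving the sender's device.
    return key, nonce, ciphertext

def decrypt_message(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key, nonce, ct = encrypt_message(b"burn after reading")
assert decrypt_message(key, nonce, ct) == b"burn after reading"
```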

The takeaway here is that unlocking a received message = decrypting a received message.  A user may set their device to automatically unlock messages, but the default behavior is to leave them locked on receipt and manually initiate the unlock.

Asymmetric operations are applied to messages in transit.  As previously mentioned, the cipher text and the symmetric key used to encrypt it are packaged up and encrypted using the public key of the intended recipient’s device.  The public key is signed with components from said device.  The recipient device uses the corresponding private key to decrypt the package, and then the symmetric key is used to decrypt the cipher text (unlocking the message) so the recipient can read it. If a message is intended for multiple recipients or for a recipient who has multiple devices, a different set of keys is used for each destination device.

Here is where the pain starts to come. The keys used in the asymmetric operations are ephemeral; a different set of public/private key pairs are used each time a message is exchanged between devices. Wickr states in its technical paper that pools of components (not the actual keys themselves) of private-public pairs are created and refreshed by a user’s device while they are connected to Wickr’s servers.  If a device is disconnected from the Wickr servers, it will use what key pairs it has, and will then refresh its pool once it has re-established the connection.

Even if a private key is compromised, the only message that can be decrypted is the one that corresponds to that specific private/public key pair; the rest of the messages are still safe since they use different pairs.

But wait, it gets worse.  Just to turn the knife a bit more, Wickr has a different encryption scheme for on-device storage that is separate from message transfers.  When Wickr is first installed on a device a Node Storage Root Key (Knsr) is generated. The Knsr is then applied to certain device data (described as “device specific data and/or identifiers derived from installed hardware or operating system resources that are unique, constant across application installs but not necessary secret“) to generate the Local Device Storage Key (Klds). The Klds is used to encrypt Wickr data stored locally on the device, including files necessary for Wickr to operate.

The Klds is itself encrypted using a key derived from the user’s password being passed through scrypt. When a user successfully logs in to the Wickr app, the Klds is decrypted and placed into the device’s memory, allowing for successful exposure of the locally stored Wickr data through the app UI. When the app is terminated, placed in an inactive state, or a user logs out, the Klds is removed from memory, and the Wickr data is no longer available.
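The password-to-key step is a standard scrypt derivation. Below is a minimal sketch using Python’s hashlib; the salt handling and cost parameters are illustrative assumptions, as Wickr does not publish its actual values.

```python
# Sketch: derive a storage-protection key from a user password with scrypt.
# Salt handling and cost parameters are illustrative; Wickr's actual values are not published.
import hashlib
import os

password = b"correct horse battery staple"   # the user's Wickr password
salt = os.urandom(16)                        # in practice the salt must be stored and recovered

key_encryption_key = hashlib.scrypt(
    password, salt=salt, n=2**14, r=8, p=1, dklen=32
)
# This derived key would then be used to decrypt the Local Device Storage Key (Klds),
# which in turn decrypts the locally stored Wickr data.
print(key_encryption_key.hex())
```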

For those who have legal processes at their disposal (court orders, search warrants, & subpoenas), the news is equally dire.  Wickr does keep undelivered messages on their servers for up to six (6) days, but, as I previously mentioned, the messages (which are in transit) are encrypted.  Wickr states they do not have access to any keys that would decrypt the messages that are stored.  There is some generic account and device information, but no message content.  For more information on what little they do have, please read their legal process guide.

So, Is There Anything I Can Actually Get?

The answer to this question is the standard digital forensics answer:  “It depends.” The encryption scheme combined with the on-device security measures makes it extremely difficult to recover any useful data from either the app or Wickr, but there is some data that can be retrieved, the value of which depends on the goal of the examination.

Testing has shown a manual examination is the only way, as of the time of this post, to recover message content from iOS, macOS, and Windows (files not included). This requires unfettered access to the device along with the user’s Wickr password. Due to a change in its encryption scheme (when this happened is unknown), Wickr is not supported by any tool I tested on any platform, which included the current versions of Cellebrite, Axiom, and XRY, as well as the Android virtualization options offered by two mobile vendors.  Along those same lines, I also tried Alexis Brignoni’s virtualization walkthrough using Nox Player, Virtual Box, and Genymotion, with no luck on all three platforms.

Things can be slightly different for those of you who have Wickr deployed in an enterprise environment.  The enterprise flavor of Wickr does have compliance (think FOIA requests and statutory/regulatory requirements) and eDiscovery features, which means message content may be retained so long as the feature is enabled (I didn’t have access to this version, so I could not ascertain if this was the case).  Just be aware that if the environment includes Wickr, this may be an option for you.

The type and amount of data an examiner can possibly get is dependent upon which platform is being examined. The nice thing is there is some consistency, so this can help examiners determine, rather quickly, if there is anything to be recovered.  The consistency can be broken up into two categories: iOS and Android/macOS/Windows. One thing is consistent across ALL platforms, though: an examiner should not expect to find any message content beyond six (6) days from the time of examination.

Android/macOS/Windows

The most important thing to remember for the Android, macOS, and Windows platforms is that order matters when doing a manual examination. That is, the order in which you examine the device for Wickr content is important. Failure to keep this in mind may result in recoverable data being deleted unnecessarily.  Android can be slightly different, which I will discuss shortly.

I will go ahead and get one thing out of the way for all three platforms: the databases containing account information, conversation information, contacts, and message content are all encrypted.  The databases are protected with SQL Cipher 3, and the Wickr user password is not the database password (I tried to apply the correct Wickr password in DB Browser for SQLite, and none of the databases would open – you’ll see why below). Figure 10 shows the macOS database in hexadecimal view and Figure 11 shows the Windows database.  While not shown here, just know the Android database looks the same.

Figure 10.png
Figure 10.  Wickr database in macOS
Figure 11.PNG
Figure 11.  Wickr database in Windows.

You may have noticed the file name for both macOS and Windows is the same:  wickr_db.sqlite.  The similarities do not stop there.  Figure 12 shows the location of the database in macOS, Figure 13 shows the database’s location in Windows.

Figure 12
Figure 12.  Home in macOS.  ~/Users/%UserName%/Library/Application Support/Wickr, LLC/WickrMe/
Figure 13
Figure 13.  Home in Windows.  C:\Users\%UserName%\AppData\Local\Wickr, LLC\WickrMe\

As you can see, most of the file names in each home directory are the same.  Note that the last folder in the path, “WickrMe,” may be different depending on what version is installed on the device (Wickr Me, Wickr Pro, Enterprise), so just know the last hop in the path may not be exactly the same.

Interesting note about the “preferences” file in Windows:  it is not encrypted.  It can be opened, and doing so reveals quite a bit of octet data.  The field “auid” caught my attention, and while I have a theory about its value, I’ll save it for another blog post.

For Android, the directory and layout should look familiar to those who examine Android devices.  The database file, wickr_db, sits in the databases folder.  See Figure 14.

Figure 14
Figure 14.  Home in Android.  /data/data/com.mywickr.wickr2

If you will recall, when a user unlocks a message they are actually decrypting it.  This also applies to files that are sent through Wickr.  Unlike messages, which are stored within the database, files, both encrypted and decrypted, reside in the Wickr portion of the file system.  When a message with an attached file is unlocked, an encrypted version of the file is created within the Wickr portion of the file system.  When the file is opened (not just unlocked), it is decrypted, and a decrypted version of that file is created within a different path within the Wickr portion of the file system.  Figure 15 shows the Android files, Figure 16 shows the macOS files, and Figure 17 shows the Windows files.  The top portion of each figure shows the files in encrypted format and the bottom portion of the figure shows the files in decrypted format.

Figure 15.png
Figure 15.  Encrypted/decrypted files in Android.
Figure 16
Figure 16.  Encrypted/decrypted files in macOS.
Figure 17
Figure 17.  Encrypted/decrypted files in Windows.

When a file is sent through Wickr it is given a GUID, and that GUID is consistent across devices for both the sender and the recipient(s).  In the figures above, Android represents Test Account 1 and macOS/Windows represents Test Account 2, so you will notice that the same GUIDs are seen on both accounts (all three platforms).

The fact an encrypted version of a file exists indicates a device received the file and the message was unlocked, but it doesn’t necessarily indicate the file was opened.  It isn’t until the user chooses to open the file within the Wickr UI that a decrypted version is deposited onto the device as seen above.  An example of the open dialogue is seen in Figure 18.  The triple dots in the upper righthand corner of the message bubble invoke the menu.

Figure 18.png
Figure 18.  Open dialogue example (Windows UI).

If a picture is received and the message is unlocked, a thumbnail is rendered within the Wickr UI message screen, as seen in Figure 18, but this doesn’t deposit a decrypted version of that picture; the user must open the file.  Any other file type, including videos, merely displays the original file name in the Wickr UI.  A user will have to open the file in order to view its contents.

The directories for files on each platform are as follows (paths start in the Wickr home directory):

Android:  encrypted files in ~/files/enc, decrypted files in ~/cache/dec

macOS:  encrypted files in ~/temp/attachments, decrypted files in ~/temp/preview

Windows:  encrypted files in ~/temp/attachments, decrypted files in ~/temp/preview

This behavior applies to files both sent and received.  Also keep in mind you may find encrypted files with no corresponding decrypted version.  This may be due to the message retention time expiring, which is why the order of examination is important, or it may mean the user never opened the file.
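One quick triage step is to diff the two folders.  Below is a minimal sketch for the Android layout, assuming the on-disk file names are the transfer GUIDs; adjust the paths for macOS/Windows per the list above and account for any file extensions you observe.

```python
# Sketch: flag encrypted Wickr files that have no decrypted counterpart (Android layout).
# Assumes the on-disk file names are the transfer GUIDs; adjust paths/extensions as needed.
from pathlib import Path

wickr_home = Path("com.mywickr.wickr2")   # extracted Android Wickr home directory
enc_dir = wickr_home / "files" / "enc"    # encrypted copies (created when a file message is unlocked)
dec_dir = wickr_home / "cache" / "dec"    # decrypted copies (created only when the file is opened)

enc_names = {p.stem for p in enc_dir.iterdir() if p.is_file()}
dec_names = {p.stem for p in dec_dir.iterdir() if p.is_file()}

for name in sorted(enc_names - dec_names):
    print(f"{name}: encrypted copy present, no decrypted counterpart (never opened or already expired)")
```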

For both macOS and Windows, the only way to recover message content is via a manual examination using the Wickr UI, which means that a logical image should contain the sought after data.  However, the order of your examination can impact your ability to recover any decrypted files that may be present on the device.  Since the Wickr application is keeping track of what files may have passed their message retention period, it is extremely important to check for decrypted files prior to initiating Wickr on the device for a manual examination.  Failure to do so will result in any decrypted file whose message retention time has expired being deleted.

The Android database.  Slightly different

While the databases for macOS and Windows are inaccessible, the story is better for Android.  While conducting research for this post I discovered Cellebrite Physical Analyzer was not able to decrypt the wickr_db database even though it was prompting for a password.  Cellebrite confirmed Wickr had, in fact, changed their encryption scheme and Physical Analyzer was not able to decrypt the data.  A short time later they had a solution, which allowed me to proceed with this part of the post.  While not currently available to the public, this solution will be rolled out in a future version of Physical Analyzer.  Fortunately, I was granted pre-release access to this feature.

Again, thank you Heather and Or.  🙂

While there is still a good deal of data within the wickr_db file that is obfuscated, the important parts are available to the examiner, once decrypted.  The first table of interest is “Wickr_Message.”  See Figure 19.

Figure 19.png
Figure 19.  Wickr_Message table.

The blue box is the timestamp for sent and received messages (Unix Epoch) and the orange box contains the text of the message or the original file name that was either sent or received by the device.  The timestamp in the red box is the time the message will be deleted from the Wickr UI and database.  The values in the purple box are interesting.  Based on testing, each file I sent or received had a value of 6000 in the messageType column.  While not that important here, these values are important when discussing iOS.

The blobs in the messagePayload column are interesting in that they contain a lot of information about file transfers between devices.  See Figure 20.

Figure 20.png
Figure 20.  Message payload.

The file sender can be seen next to the red arrow, the file type (e.g., picture, document, etc.) is next to the blue arrow, and the GUID assigned to the file is seen in the green box.  The GUID values can be matched up to the GUIDs of the files found in the /enc and /dec folders.  Here, the GUID in the green box in Figure 20 can be seen in both folders in Figure 21.  Finally, you can see the original name of the file next to the purple arrow (iOS_screenshot).  The original file name also appears in the cachedText column in Figure 19.

Figure 21
Figure 21.  Matching GUIDs
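Because the payload blobs carry the GUIDs as readable strings, a simple pattern match is enough to link database rows to files on disk.  This is a minimal sketch assuming a decrypted local copy of wickr_db and standard 8-4-4-4-12 GUID strings.

```python
# Sketch: pull GUID-looking strings out of messagePayload blobs so they can be
# matched against file names in the enc/dec folders.
# Assumes a decrypted local copy of wickr_db and standard 8-4-4-4-12 GUIDs.
import re
import sqlite3

GUID_RE = re.compile(rb"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}")

conn = sqlite3.connect("wickr_db")
for (payload,) in conn.execute("SELECT messagePayload FROM Wickr_Message"):
    if not payload:
        continue
    data = payload if isinstance(payload, bytes) else payload.encode("utf-8", "replace")
    for match in GUID_RE.findall(data):
        print(match.decode())
conn.close()
```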

The orange box in Figure 20 contains the recipient’s username along with the hash value of the recipient’s Wickr User ID.  That value can be matched up to the value in the senderUserIDHash column in the same table (see the red box in Figure 22).  The title of this column is deceptive, because it isn’t actually the userID that is represented.

Figure 21
Figure 22.  Sender’s hashed ID

Figure 23 shows the same hash in the serverIDHash column in the Wickr_ConvoUser table.

Figure 23
Figure 23.  Same IDs.

Also of note in this table and the one seen in Figure 22 is the column vGroupID.  Based on testing, it appears every conversation is considered to be a “group,” even if that group only has two people.  For example, in my testing I only had my two test accounts that were conversing with each other.  This is considered a “group,” and is assigned a GUID (seen in the blue box).  The good thing about this value is that it is consistent across devices and platforms, which could come in handy when trying to track down conversation participants or deleted conversations (by recovering it from another device).  An example of this cross-platform-ing is seen in Figure 24, which shows the table ZSECEX_CONVO from the Wickr database in iOS.  Note the same GroupID.

Figure 24
Figure 24.  Same group ID.

Figure 25 shows, again, the serverIDHash, but this time in the table Wickr_User.  It is associated with the value userIDHash.  The value userAliasHash (the same table) is seen in Figure 26.

Figure 25
Figure 25.  serverIDHash and the userIDHash.
Figure 26
Figure 26.  userIDHash (Part 2).

Figure 27 shows some telemetry for the users listed in this table.

Figure 27
Figure 27.  User telemetry.

The columns isHidden (purple box) and lastMessaged (red box) are self-explanatory.  The value of 1 in the isHidden column means the user does not appear in the Conversations section of the UI.  That value coupled with the value of 0 in the lastMessaged column indicates this row in the table probably belongs to the logged in account.

The lastRefreshTime column (blue box) has the same value in both cells.  The timestamp in the cell for row 1 is when I opened the Wickr app, which, undoubtedly, caused the app to pull down information from the server about my two accounts.  Whether this is what this value actually represents requires more testing.  The same goes for the values in the lastActivityTime column (orange box).  The value seen in the cell in row 1 is, based on my notes, the last time I pushed the app to the background.  The interesting thing here is there was activity within the app after that timestamp (the following day around lunchtime PDT).  More testing is required in order to determine what these values actually represent.  For now, I would not trust lastActivityTime at face value.

The table Wickr_Settings contains data of its namesake.  The first column of interest is appConfiguration (red box).  See Figure 28.

Figure 28.PNG
Figure 28.  Wickr_Settings.

The data in this cell is in JSON format.  Figure 29 shows the first part of the contents.

Figure 29
Figure 29.  JSON, Part 1.

There are two notable values here.  The first, in the blue box, is self-explanatory: locationEnabled (Wickr can use location services).  I let Wickr have access to location services during initial setup, so this value is set to ‘true.’  The value in the red box, alwaysReauthenticate, refers to the setting that determines whether or not a user has to log in to Wickr each time the app is accessed.  It corresponds to the switch in the Wickr settings seen in Figure 30 (red box).

Figure 30.PNG
Figure 30.  Login each time?  No thanks.

Because I didn’t want to be bothered with logging in each time, I opted to have Wickr save my password and log in automatically, thus this value is set to ‘false.’  If a user has this setting enabled and does not provide the Wickr password, a manual examination will be impossible.

The rest of the contents of the JSON data are unremarkable, and are seen in Figure 31.

Figure 31
Figure 31.  JSON, Part 2.  Nothing much.

There are three additional columns that are notable in this table.  The first is the setting for the Secure Shredder, autoShredderEnabled.  This value is set to 1, which means it is enabled.  I would not expect to see any other value in this cell as Secure Shredder runs automatically in Wickr Me and some tiers of Wickr Pro; there is no way to disable it unless the Silver, Gold, or Enterprise version of Wickr is present.  See Figure 32.

Figure 32
Figure 32.  Anonymous Notifications, Secure Shredder, and Auto Unlock.

The second notable column is unlockMessagesEnabled (red box).  As its name implies, this setting dictates whether a message is unlocked on receipt, or if a user has to manually initiate the unlock.  I took the default setting, which is not to unlock a received message (database value of 0).  Figure 33 shows the setting in the Wickr Settings UI.

Figure 33
Figure 33.  Message Auto Unlock switch.

Figure 32 also shows anonymousNotificationsEnabled (orange box).  This setting dictates whether Wickr notifications provide any specific information about a received message/file (e.g., sender’s user name, text of a message, file name), or if the notification is generic (e.g., “You have a new message”).  Again, the default is to show generic notifications (database value of 1).  Figure 34 shows the setting in the Wickr Settings UI.  Note the switch is off, but since I have Auto Unlocks disabled, this switch is not used because my messages are not automatically unlocked on receipt.

Figure 34.PNG
Figure 34.  Anonymous Notification setting.

I want to address one last table: Wickr_Convo.  Using the conversation GUIDs, you can determine the last activity within each conversation that is stored on the device.  In Figure 35, this conversation GUID is the same as the ones seen in Figures 23 and 25.

Figure 35
Figure 35.  Conversations listed by GUID.

There are two values that are notable.  The first is the lastOutgoingMessageTimestamp (red box). That is a pretty self-explanatory label, right?  Not quite, and examiners should be careful interpreting this value.  That same timestamp appears in the Wickr_Message table seen in Figure 37, but with a different label.

Figure 36
Figure 36.  Wickr_Convo table timestamps.
Figure 37.png
Figure 37.  Wickr_Message table timestamps.

It appears that the lastOutgoingMessageTimestamp from Wickr_Convo applies to the last message that did not involve a file transfer (the corresponding timestamp value seen in the Wickr_Message table).  The value lastUpdatedTimestamp (blue box in Figure 36) actually represents the last communication (message or file transfer) in the conversation, which corresponds to the blue-boxed timestamp value in the Wickr_Message table (Figure 37).

The value messageReadTimestamp (orange box in Figure 36) represents the time the last message was unlocked.  Notice that the value is just about the same as that seen in lastUpdatedTimestamp, but with more granularity.

A couple more things

There are two more files I’d like to touch on with regards to the Android version.  The first is com.google.android.gms.measurement.prefs.xml found in the /shared_prefs folder.  See Figure 38.

Figure 38
Figure 38.  Measurement data for the app.

This file keeps track of certain data about app usage.  The most obvious data points are the install time for the app itself (orange box) and the first time the app was opened (red box).  The next two data points are app_backgrounded (yellow box) and last_pause_time (green box).  The app_backgrounded value, as you can see, is a boolean that indicates whether the app is active on the device screen or running in the background (i.e., not front-and-center on the device).  The value last_pause_time is the last time the app was pushed to the background by the user (“paused”).  If an examiner is pulling this data from a seized device, it is highly likely that the app_backgrounded value will be true, unless the device was seized and imaged while Wickr was actively being used.

The value in the blue box, last_upload, is a deceiving value, and I have yet to figure out what exactly it represents.  I have a theory that it may be the last time the app uploaded information about its current public key, which is used in the asymmetric encryption operations during message transport, but I cannot be totally sure at this point.  Just know that last_upload may not necessarily represent the last time a file was uploaded.

The last file is COUNTLY_STORE.xml.  Based on research, it appears this file may be used for analytical purposes in conjunction with the Countly platform.  This file keeps some metrics about the app, including the cell service carrier, platform version (SDK version), hardware information, and a unique identifier, which, on Android, is the advertising ID (adid).  The data appears to be broken up into transactions, with each transaction containing some or all of the data points I just mentioned. Each transaction appears to be separated by triple colons.  Each also contains a timestamp.

A representative example can be seen in Figure 39; it does not contain all of the data points I mentioned, but it gives you a good idea of what to expect.

Figure 40
Figure 39.  COUNTLY_STORE.xml in Android.

This file is inconsistent.  On some of my extractions the file was empty after app use and on others it was full of data.  Sometimes the timestamps coincided with my being in the app, and other times they did not.  There does not seem to be enough consistency to definitively say the timestamps seen in this file are of any use to examiners.  If someone has found otherwise, please let me know.

There is an iOS equivalent:  County.dat.  This file contains most of the same data points I already described, and while it has a .dat extension, it is a binary plist file.  In lieu of the adid (from Android), a deviceID is present in the form of a GUID.  I think this deviceID serves more than one purpose, but that is speculative on my part.

Speaking of iOS…

iOS is different. Of course it is.

The iOS version of Wickr behaves a little differently, probably due to how data is natively stored on iOS devices; that data is already encrypted and hard to access.  The two biggest differences, from a forensic standpoint, are the lack of decrypted versions of opened files and the fact that the database itself is not encrypted.

Before I proceed any further, though, I do want to say thank you again to Mike Williamson for his help in understanding how the iOS app operates under the hood.  🙂

I searched high and low in my iOS extractions, and never found decrypted versions of files on my device.  So there are two possible explanations:  1, they are in a place I didn’t look (highly unlikely but not completely impossible), or 2, they are never created in the first place.  I’m leaning towards the latter.  Regardless, there are no decrypted files to discuss.

Which leaves just the database itself.  While the database is not encrypted, a majority of the data written to the table cells is.  I will say I am aware of at least two mobile device forensic vendors, who shall not be named at this time, that will probably have support for Wickr on iOS in the near future.  In the meantime, though, we are left with little data to review.

The first table is ZWICKR_MESSAGE, and, as you can guess, it contains much of the same data as the Wickr_Message table in Android.  Remember when I mentioned the messageType value in Android?  In iOS that value is ZFULLTYPE.  See Figure 40.

Figure 40
Figure 40.  Type 6000.

The value of 6000 is seen here, and, as will be seen shortly, it corresponds to files that have been sent/received.  Also, note the Z_PK values 8 and 10, respectively, because they will be seen in another table.

Figure 41 shows some additional columns, the titles of which are self-explanatory.  One I do want to highlight, though, is the ZISVISIBLE column.  The two values in red boxes represent messages I deleted while within the Wickr UI.  There is a recall function in Wickr, but I was not able to test this out to see if this would also place a value of 0 in this column.

Figure 41
Figure 41.  Deleted message indicators.

Figure 42 shows another set of columns in the same table.  The columns ZCONVO and Z4_CONVO actually come from a different table within the database, ZSECEX_CONVO.  See Figures 42 and 43.

Figure 42
Figure 42.  Conversation and Calls.
Figure 43
Figure 43.  ZSECX_CONVO table.

In Figure 42 the two columns highlighted in the orange box, ZLASTCALLCONVO and Z4_LASTCALLCOVO, appear to keep track of calls made via Wickr; in my case these are audio calls.  Here, the value indicates the last call to take place, and which conversation it occurred in.  This is interesting since the Android database did not appear to keep track of calls as far as I could tell (the data may have been encrypted).  Remember, this table is equivalent to the Wickr_ConvoUser table in the Android database, so you will be able to see the ZVGROUPID shortly.

The next bit of the table involves identifying the message sender (ZUSERSENDER), the timestamp of the message (ZTIMESTAMP), the time the message will expire (ZCLEANUPTIME), and the message identifier (ZMESSAGEID).  The timestamps in this table are stored in Core Foundation Absolute Time (CFAbsolute).  See Figure 44.

Figure 44
Figure 44.  Messages and their times.

The values in the ZUSERSENDER column can be matched back to the Z_PK column in the ZSECX_USER table.
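
If you want to normalize those CFAbsolute values and resolve senders in one pass, a query along the lines of the sketch below works; CFAbsolute counts seconds from January 1, 2001 (UTC), so adding 978,307,200 turns it into Unix epoch.  This is a minimal sketch against a copy of the database: the database file name is a placeholder, and the table names are as they appear in the figures (the user table spelling may differ in your extraction).

import sqlite3
from datetime import datetime, timezone

COCOA_OFFSET = 978307200  # seconds between the Unix epoch and 2001-01-01 00:00:00 UTC

con = sqlite3.connect("wickr_ios.sqlite")  # path to your copy of the iOS Wickr database (placeholder)
query = """
SELECT m.Z_PK, m.ZMESSAGEID, m.ZTIMESTAMP, m.ZCLEANUPTIME, u.Z_PK AS sender_pk
FROM   ZWICKR_MESSAGE m
LEFT JOIN ZSECX_USER u ON u.Z_PK = m.ZUSERSENDER
"""
for pk, message_id, ts, cleanup, sender_pk in con.execute(query):
    sent = datetime.fromtimestamp(ts + COCOA_OFFSET, tz=timezone.utc) if ts else None
    expires = datetime.fromtimestamp(cleanup + COCOA_OFFSET, tz=timezone.utc) if cleanup else None
    print(pk, message_id, sender_pk, sent, expires)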

That’s it for this table!  The rest of the contents, including the ZBODY column, are encrypted.

The ZSECX_CONVO table has some notable data as seen in Figure 45.  The one column I do want to highlight is ZLASTTIMESTAMP, which is the time of the last activity (regardless of what it was) in the conversation (the “group”).  Interestingly, the times here are stored in Unix Epoch.

Figure 45
Figure 45.  Last time of activity in a conversation (group).

Figure 46 shows some additional data.  The last call that was either placed or received in the conversation is referenced in the column ZLASTCALLMSG (orange box – its timestamp can be obtained from the ZWICKR_MESSAGE table), along with the last person who either sent or received anything within the conversation (ZLASTUSER – red box).  The value in the ZLASTCALLMSG column can be matched back to the values in the Z_PK column in the ZWICKR_MESSAGE table, and the value in the ZLASTUSER column can be matched back to the Z_PK column in the ZSECX_USER table.  And, finally, as I previously showed in Figure 24, the ZVGROUPID (blue box).

Figure 46
Figure 46.  The last of the ZSECX_CONVO table.
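
Those lookups can be rolled into a single query.  Again, this is just a sketch: the conversation and user table names appear with slightly different spellings in different figures, so use whatever names your copy of the database actually contains.

query = """
SELECT c.Z_PK           AS convo_pk,
       c.ZVGROUPID      AS group_id,
       c.ZLASTTIMESTAMP AS last_activity,        -- Unix epoch in this table
       m.ZTIMESTAMP     AS last_call_cfabsolute,
       u.Z_PK           AS last_user_pk
FROM   ZSECEX_CONVO c
LEFT JOIN ZWICKR_MESSAGE m ON m.Z_PK = c.ZLASTCALLMSG
LEFT JOIN ZSECX_USER     u ON u.Z_PK = c.ZLASTUSER
"""
for row in con.execute(query):   # reuses the connection from the previous sketch
    print(row)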

The table ZSECEX_USER, as seen in Figures 47 and 48, contains data about not only the account owner, but also about users who the account holder may be conversing with.  The table contains some of the same information as the Wickr_User table in Android.  In fact, Figure 47 looks very similar to Figure 27.  The values represent the same things as well.

Figure 47
Figure 47.  Hidden status and last activity time.

Figure 48 shows the same items as seen in Figure 26, but, as you can see, the hash values are different, which makes tracking conversation participants using this information impossible.

Figure 48
Figure 48.  Same participants, different hashes.

File transfers in iOS are a bit tricky because some of the data is obfuscated, and in order to figure out which file is which an examiner needs to examine three tables:  Z_11MSG, ZWICKR_MESSAGE, and ZWICKR_FILE.  Figure 49 shows the Z_11MSG table.

Figure 49
Figure 49.  Z_11MSG.

The column Z_13MSG refers to the ZWICKR_MESSAGE table, with the values 8 and 10 referring to values in the Z_PK column in that table.  See Figure 50.

Figure 50
Figure 50.  Transferred files.

Obviously, associated timestamps are found in the same row further into the table.  See Figure 51.

Figure 51
Figure 51.  Timestamps for the transferred files.

The column Z_11Files in Figure 49 refers to the ZWICKR_FILE table.  See Figure 52.

Figure 52
Figure 52.  Files with their GUIDs.

The values in the Z_11FILES column in Figure 49 refer to the values in the Z_PK column seen in Figure 52.  Figure 53 shows the files within the file system.  As I previously mentioned, there are no decrypted versions of these files.

Figure 53
Figure 53.  The file GUIDs from the database table.
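
Pulling those three tables together in one query saves some back-and-forth.  The sketch below joins the link table to the message and file tables; the column selections are limited to what is visible in the figures, so extend it as needed once you see your own schema.

query = """
SELECT msg.Z_PK       AS message_pk,
       msg.ZTIMESTAMP AS message_ts_cfabsolute,
       f.*                         -- the file table row carries the GUID-style name seen on disk
FROM   Z_11MSG link
JOIN   ZWICKR_MESSAGE msg ON msg.Z_PK = link.Z_13MSG
JOIN   ZWICKR_FILE    f   ON f.Z_PK  = link.Z_11FILES
"""
for row in con.execute(query):   # same database connection as the earlier sketches
    print(row)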

Figure 54 shows the ZANONYMOUSNOTIFICATION and ZAUTOUNLOCKMESSAGES values from the ZSECEX_ACCOUNT table (the Android equivalents were seen in Figure 32).  Both values here are zero, meaning I had these features turned off.

Figure 54
Figure 54.  Anonymous Notification and Auto Unlock settings in iOS.

The last table I want to highlight is the ZSECX_APP table.  See Figure 55.

Figure 55
Figure 55.  Users and their associated app installations.

The values in the ZUSER column relate back to the values seen in the Z_PK column in the ZWICKR_USER table.  Each different value in the ZAPPIDHASH represents a different app install on a device.  For example, Test Account 1 appeared on four different devices (iPhone, iPad, Windows, macOS).  This means four different devices each with their own individual installation of Wickr, which translates to a different ZAPPIDHASH value for each individual device.  Knowing a user has multiple devices could be beneficial.  Warning:  be careful, because this isn’t the only way to interpret this data.

As part of the testing, I wanted to see if this value could change on a device, and, as it turns out, it can.  Test Account 2 was only logged in on the Pixel 3.  I installed the app, used it, pulled the data, wiped the Pixel and flashed it with a new install of Android, and then reinstalled Wickr.  I repeated those steps one more time, which means Wickr was installed on the same device three different times, and, as you can see, there are three different hash values for ZUSER 2 (Test Account 2).

The moral of this story is that while this value can possibly represent different devices where a user may be logged in, it actually represents instances of app installation, so be careful in your interpretation.

Conclusion

Wickr is a tough one.  This app presents all sorts of forensic challenges.  At the moment there is very little data that is recoverable, but some insights about communication and app usage can be gleaned from what little data is available.  Sometimes, files can be recovered, and that may be all an examiner/investigator needs.

The good news is, though, there is help on the horizon.

Two Snaps and a Twist – An In-Depth (and Updated) Look at Snapchat on Android

 

There is an update to this post. It can be found after the ‘Conclusion’ section.

I was recently tasked with examining a two-year old Android-based phone which required an in-depth look at Snapchat. One of the things that I found most striking (and frustrating) during this examination was the lack of a modern, in-depth analysis of the Android version of the application beyond the tcspahn.db file, which, by the way, doesn’t exist anymore, and the /cache folder, which isn’t really used anymore (as far as I can tell). I found a bunch of things that discussed decoding encrypted media files, but this information was years old (Snapchat 5.x). I own the second edition of Learning Android Forensics by Skulkin, Tyndall, and Tamma, and while this book is great, I couldn’t find where they listed the version of Snapchat they examined or the version of Android they were using; what I found during my research for this post did not really match what was written in their book. A lot of things have changed.

Googling didn’t seem to help either; I just kept unearthing the older research. The closest I got was a great blog post by John Walther that examined Snapchat 10.4.0.54 on Android Marshmallow. Some of John’s post lined up with what I was seeing, while other parts did not.

WHAT’S THE BIG DEAL?

Snapchat averages 190 million users daily, which is just under half of the U.S. population, and those 190 million people send three billion snaps (pictures/videos) daily. Personally, I have the app installed on my phone, but it rarely sees any usage. Most of the time I use it on my kid, who likes the filters that alter his voice or require that he stick out his tongue. He is particularly fond of the recent hot dog filter.

One of the appealing things about Snapchat is that direct messages (DMs) and snaps disappear after they’re opened. While the app can certainly be used to send silly, ephemeral pictures or videos, some people find a way to twist the app for their own nefarious purposes.

There has been plenty written in the past about how some traces of activity are actually recoverable, but, again, nothing recent. I was surprised to find that there was actually more activity-related data left behind than I thought.

Before we get started, just a few things to note (as usual). First, my test data was generated using a Pixel 3 running Android 9.0 (Pie) with a patch level of February 2019. Second, the version of Snapchat I tested is 10.57.0.0, which was the most current version as of 05/22/2019. Third, while the phone was not rooted, it did have TWRP, version 3.3.0-0, installed. Extracting the data was straightforward as I had the Android SDK Platform tools installed on my laptop. I booted into TWRP and then ran the following from the command line:

adb pull /data/data/com.snapchat.android

That’s it. The pull command dropped the entire folder in the same path as where the platform tools resided.

As part of this testing, I extracted the com.snapchat.android folder five different times over a period of 8 days because I wanted to see what stuck around versus what did not. I believe it is also important to understand the volatility of the data this app generates; knowing how quickly data disappears will help investigators in the field and examiners understand exactly how much time, if any, they have before the data they are seeking is no longer available.

I will add that I tested two tools to see what they could extract: Axiom (version 3.0) and Cellebrite (UFED 4PC 7.18 and Physical Analyzer 7.19). Both tools failed to extract (parsing not included) any Snapchat data. I am not sure if this is a symptom of these tools (I hope not) or my phone. Regardless, both tools extracted nothing.

TWO SNAPS AND…SOME CHANGE

So, what’s changed? Quite a bit as far as I can tell. The storage location of where some of the data that we typically seek has changed. There are enough changes that I will not cover every single file/folder in Snapchat. I will just focus on those things that I think may be important for examiners and/or investigators.

One thing has not changed: the timestamp format. Unless otherwise noted, all timestamps discussed are in Unix Epoch.

The first thing I noticed is that the root level has some new additions (along with some familiar faces). The folders that appear to be new are “app_textures”, “lib”, and “no_backup.” See Figure 1.

Figure 1. Root level of the com.snapchat.android folder.

The first folder that may be of interest is one that has been of interest to forensicators and investigators since the beginning: “databases.” The first database of interest is “main.db.” This database replaces tcspahn.db as it now contains a majority of user data (again, tcspahn.db does not exist anymore). There is quite a bit in here, but I will highlight a few tables. The first table is “Feed.” See Figure 2.

Figure 2. The Feed.

This table contains the last action taken in the app. Specifically, the parties involved in that action (seen in Figure 2), what the action was, and when the action was taken (Figure 3). In Figure 4 you can even see which party did what. The column “lastReadTimestamp” is the absolute last action, and the column “lastReader” shows who did that action. In this instance, I had sent a chat message from Fake Account 1 (“thisisdfir”) to Fake Account 2 (“hickdawg957”) and had taken a screenshot of the conversation using Fake Account 1. Fake Account 2 then opened the message.

Figure 3. Last action.

Figure 4. Who did what?
The second table is “Friend.” This table contains anyone who may be my friend: the other party’s username, user ID, display name, the date/time I added that person as a friend (column “addedTimestamp”), and the date/time the other person added me as a friend (column “reverseAddedTimestamp”). Also seen are any emojis that may be assigned to my friends. See Figures 5, 6, and 7.

Figure 5. Username, User ID, & Display Name.
Figure 6. Friendmojis (emojis added to my friends).

Figure 7. Timestamps for when I added friends and when they added me.

Note that the timestamps are for when I originally added the friend/the friend added me. The timestamps here translate back to dates in November of 2018, which is when I originally created the accounts during the creation of my Android Nougat image.

One additional note here. Since everyone is friends with the “Team Snapchat” account, the value for that entry in the “addedTimestamp” column is a good indicator of when the account you’re examining was created.
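
A quick way to pull that indicator is to query the Friend table directly. The sketch below is a minimal example; the column names are as I described them above, and the exact name under which Team Snapchat appears, along with whether the timestamp is stored in seconds or milliseconds, are assumptions on my part, so sanity-check the result.

import sqlite3
from datetime import datetime, timezone

con = sqlite3.connect("main.db")   # copy of databases/main.db from the extraction
row = con.execute(
    "SELECT username, displayName, addedTimestamp FROM Friend "
    "WHERE username LIKE 'teamsnapchat%' OR displayName LIKE 'Team Snapchat%'"
).fetchone()
if row:
    username, display_name, added = row
    seconds = added / 1000 if added > 10**12 else added   # handle milliseconds vs. seconds
    print(display_name or username, "added:", datetime.fromtimestamp(seconds, tz=timezone.utc))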

The next table is a biggie: Messages. I will say that I had some difficulty actually capturing data in this table. The first two attempts involved sending a few messages back and forth, letting the phone sit for 10 or so minutes, and then extracting the data. In each of those instances, absolutely NO data was left behind in this table.

In order to actually capture the data, I had to leave the phone plugged in to the laptop, send some messages, screenshot the conversation quickly, and then boot into TWRP, which all happened in under two minutes time. If Snapchat is deleting the messages from this table that quickly, they will be extremely hard to capture in the future.

Figure 8 is a screenshot of my conversation (all occurred on 05/30/2019) taken with Fake Account 1 (on the test phone) and Figure 9 shows the table entries. The messages on 05/30/2019 start on Row 6.

Figure 8. A screenshot of the conversation.

Figure 9. Table entries of the conversation.

The columns “timestamp” and “seenTimestamp” are self-explanatory. The column “senderId” is the “id” column from the Friends table. Fake Account 1 (thisisdfir) is senderId 2 and Fake Account 2 (hickdawg957) is senderId 1. The column “feedRowId” tells you who the conversation participants are (beyond the sender). The values link back to the “id” column in the Feed table previously discussed. In this instance, the participants in the conversation are hickdawg957 and thisisdfir.
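
Resolving those IDs by hand gets tedious, so a couple of joins help. This is a sketch only; I am using the table and column names exactly as described above (Messages, Friend, Feed), and assuming the Friend table's username column is literally named "username," so adjust if your copy of main.db names things slightly differently.

query = """
SELECT m.timestamp,
       m.seenTimestamp,
       f.username AS sender,
       m.feedRowId,          -- resolves to the conversation row in the Feed table
       m.type
FROM   Messages m
LEFT JOIN Friend f ON f.id = m.senderId
ORDER BY m.timestamp
"""
for row in con.execute(query):   # reuses the main.db connection from the previous sketch
    print(row)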

In case you missed it, Figure 8 actually has two saved messages between these two accounts from December of 2018. Information about those saved messages appears in Rows 1 and 2 in the table. Again, these are relics from previous activity and were not generated during this testing. This is an interesting find as I had completely wiped and reinstalled Android multiple times on this device since those messages were sent, which leads me to speculate these messages may be saved server-side.

In Figure 10, the “type” column is seen. This column shows the type of message that was transmitted. There are three “snap” entries here, but, based on the timestamps, these are not snaps that I sent or received during this testing.

Figure 10. The “types” of messages.
After the “type” column there are a lot of NULL values in a bunch of columns, but you eventually get to the message content, which is seen in Figure 11. Message content is stored as blob data. You’ll also notice there is a column “savedStates.” I am not sure exactly what the entries in the cells are referring to, but they line up with the saved messages.

Figure 11. Message (blob) content.

In Figure 12, I bring up one of the messages that I recently sent.

Figure 12. A sample message.

The next table is “Snaps.” This table is volatile, to say the least. The first data extraction I performed was on 05/22/2019 around 19:00. However, I took multiple pictures and sent multiple snaps on 05/21/2019 around lunch time and the following morning on 05/22/2019. Overall, I sent eight snaps (pictures only) during this time. Figure 13 shows what I captured during my first data extraction.

Figure 13. I appear to be missing some snaps.
Of the eight snaps that I sent, only six appear in the table. The first two entries in the table pre-date when I started the testing (on 05/21/2019), so those entries are out (they came from Team Snapchat). The first timestamp is from the first snap I sent on 05/22/2019 at 08:24. The two snaps from 05/21/2019 are not here. So, within 24 hours, the data about those snaps had been purged.

On 05/25/2019 I conducted another data extraction after having received a snap and sending two snaps. Figure 14 shows the results.

Figure 14. A day’s worth of snaps.
The entries seen in Figure 13 (save the first two) are gone, but there are two entries there for the snaps I sent. However, there is no entry for the snap I received. I checked all of the tables and there was nothing. I received the snap at 15:18 that day, and performed the extraction at 15:51. Now, I don’t know for sure that a received snap would have been logged. I am sure, however, that it was not there. There may be more testing needed here.

Figure 15 shows the next table, “SendToLastSnapRecipients.” This table shows the user ID of the person I last sent a snap to in the “key” column, and the time at which I sent said snap.

Figure 15. The last snap recipient.

MEMORIES

During the entire testing period I took a total of 13 pictures. Of those 13, I saved 10 of them to “Memories.” Memories is Snapchat’s internal gallery, separate from the phone’s Photos app. After taking a picture and creating an overlay (if desired), you can choose to save the picture, which places it in Memories. If you were to decide to save the picture to your Photos app, Snapchat will allow you to export a copy of the picture (or video).

And here is a plus for examiners/investigators: items placed in Memories are stored server-side. I tested this by signing into Fake Account 1 from an iOS device, and guess what…all of the items I placed in Memories on the Pixel 3 appeared on the iOS device.

Memories can be accessed by swiping up from the bottom of the screen. Figure 16 shows the Snapchat screen after having taken a photo but before snapping (sending) it. Pressing the area in the blue box (bottom left) saves the photo (or video) to Memories. The area in the red box (upper right) are the overlay tools.

Figure 16. The Snapchat screen.

Figure 17 shows the pictures I have in my Memories. Notice that there are only 9 pictures (not 10). More on that in a moment.

Figure 17. My memories. It looks like I am short one picture.

The database memories.db stores relevant information about files that have been saved to Memories. The first table of interest is “memories_entry.” This table contains an “id,” the “snap_id,” and the date the snap was created. There are two columns regarding the time: “created_time” and “latest_created_time.” In Figure 18 there is a few seconds difference between the values in some cells in the two columns, but there are also a few that are the same value. In the cells where there are differences, the differences are negligible.

There is also a column titled “is_private” (seen in Figure 19). This column refers to the My Eyes Only (MEO) feature, which I will discuss shortly. For now, just know that the value of 1 indicates “yes.”

Figure 18. Memories entries.

Figure 19. My Eyes Only status.

(FOR) MY EYES ONLY

I have been seeing a lot of listserv inquiries as of late regarding MEO. Cellebrite recently added support for MEO file recovery in Android as of Physical Analyzer 7.19 (iOS to follow), and, after digging around in the memories database, I can see why this would be an issue.

MEO allows a user to protect pictures or videos with a passcode; this passcode is separate from the user’s password for their Snapchat account. A user can opt to use a 4-digit passcode, or a custom alphanumeric passcode. Once a user indicates they want to place a media file in MEO, that file is moved out of the Memories area into MEO (it isn’t copied to MEO).

MEO is basically a private part of Memories. So, just like everything else in Memories, MEO items are also stored server-side. I confirmed this when I signed in to Fake Account 1 from the iOS device; the picture I saved to MEO on the Pixel 3 appeared in MEO on the iOS device. The passcode was the same, too. Snapchat says if a user forgets the passcode to MEO, they cannot help recover it. I’m not sure how true that is, but who knows.

If you recall, I placed 10 pictures in Memories, but Figure 17 only showed 9 pictures. That is because I moved one picture to MEO. Figure 20 shows my MEO gallery.

Figure 20. MEO gallery.

In the memories database, the table “memories_meo_confidential” contains entries about files that have been placed in MEO. See Figure 21.

Figure 21. MEO table in the memories database.

This table contains a “user_id,” the hashed passcode, a “master_key,” and the initialization vector (“iv”). The “master_key” and “initialization vector” are both stored in base64. And, the passcode….well, it has been hashed using bcrypt (ugh). I will add that Cellebrite reports Physical Analyzer 7.19 does have support for accessing MEO files, and, while I did have access to 7.19, I was not able to tell if it was able to access my MEO file since it failed to extract any Snapchat data.
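
Because the default option is a 4-digit passcode, running the 10,000 possible PINs against that bcrypt hash is at least worth a try (bcrypt is deliberately slow, but that keyspace is tiny). The sketch below uses the Python bcrypt package; the hash shown is a placeholder for the value pulled from memories_meo_confidential, and I am assuming the passcode is hashed directly with no additional transformation, which I have not verified.

import bcrypt

stored_hash = b"$2a$10$..."   # placeholder: replace with the full bcrypt string from the hashed-passcode column

def try_four_digit_pins(stored_hash):
    for pin in (f"{i:04d}" for i in range(10000)):
        if bcrypt.checkpw(pin.encode(), stored_hash):
            return pin
    return None   # custom alphanumeric passcodes obviously will not fall to this

print(try_four_digit_pins(stored_hash))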

The “user_id” is interesting: “dummy.” I have no idea what that is referring to, and I could not find it anywhere else in the data I extracted.

The next table is “memories_media.” This table does have a few tidbits of interesting data: another “id,” the size of the file (“size”), and what type of file it is (“format”). Since all of my Memories are pictures, all of the cells show “image_jpeg.” See Figures 22 and 23.

Figure 22. “memories_media.”

Figure 23. “memories_media,” part 2.

The next table is “memories_snap.” This table has a lot of information about my pictures, and brings together data from the other tables in this database. Figure 24 shows a column “media_id,” which corresponds to the “id” in the “memories_media” table discussed earlier. There are also “creation_time” and “time_zone_id” columns. See Figure 24.

Figure 24. id, media_id, creation_time, and time zone.

Figure 25 shows the width and height of the pictures. Also note the column “duration.” The value is 3.0 for each picture. I would be willing to bet that number could be higher or lower if the media were videos.

Figure 25 also shows the “memories_entry_id,” which corresponds to the “id” column in the “memories_entry” table. There is also a column for “has_location.” Each of the pictures I placed in Memories has location data associated with it (more on that in a moment).

Figure 25. Picture size, another id, and a location indicator.

Figure 26 is interesting as I have not been able to find the values in the “external_id” or “copy_from_snap_id” columns anywhere.

Figure 26. No clue here.

The data seen in Figure 27 could be very helpful in situations where an examiner/investigator thinks there may be multiple devices in play. The column “snap_create_user_agent” contains information on what version of Snapchat created the snap, along with the Android version and, in my case, my phone model.

Figure 27. Very helpful.

The column “snap_capture_time” is the time I originally took the picture and not the time I sent the snap.
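
Most of the columns discussed so far can be brought together with a couple of joins on the IDs described above. Another sketch; the only column names used are ones that appear in the figures, so anything missing from your copy of memories.db just gets dropped from the SELECT.

import sqlite3

query = """
SELECT s.id,
       s.snap_capture_time,         -- when the picture was taken
       s.creation_time,
       s.time_zone_id,
       s.snap_create_user_agent,
       m.format,
       m.size,
       e.is_private                 -- 1 = My Eyes Only
FROM   memories_snap s
LEFT JOIN memories_media m ON m.id = s.media_id
LEFT JOIN memories_entry e ON e.id = s.memories_entry_id
"""
con = sqlite3.connect("memories.db")   # copy of databases/memories.db from the extraction
for row in con.execute(query):
    print(row)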

Figure 28 shows information about the thumbnail associated with each entry.

Figure 28. Thumbnail information.

Figure 29 is just like Figure 27 in its level of value. It contains the latitude and longitude of the device when the picture was taken. I plotted each of these entries and I will say that the coordinates are accurate to within +/- 10 feet. I know the GPS capabilities of every device are different, so just be aware that your mileage may vary.

Figure 29. GPS coordinates!!

Figure 29 also has the column “overlay_size.” This is a good indication if a user has placed an overlay in the picture/video. Overlays are things that are placed in a photo/video after it has been captured. Figure 30 shows an example of an overlay (in the red box). The overlay here is caption text.

Figure 30. An overlay example.

If the value in the overlay_size column is NULL that is a good indication that no overlay was created.

Figure 31 shows the “media_key” and “media_iv,” both of which are in base64. Figure 32 shows the “encrypted_media_key” and “encrypted_media_iv” values. As you can see there is only one entry that has values for these columns; that entry is the picture I placed in MEO.

Figure 31. More base64.

Figure 32. Encrypted stuff.

The next table that may be of interest is “memories_remote_operation.” This shows all of the activity taken within Memories. In the “operation” column, you can see where I added the 10 pictures to Memories (ADD_SNAP_ENTRY_OPERATION). The 11th entry, “UPDATE_PRIVATE_ENTRY_OPERATION,” is where I moved a picture into MEO. See Figure 33.

Figure 33. Remote operations.

The column “serialized_operation” stores information about the operation that was performed. The data appears to be stored in JSON format. The cell contains a lot of the same data that was seen in the “memories_snap” table. I won’t expand it here, but DB Browser for SQLite does a good job of presenting it.

Figure 34 shows a better view of the column plus the “created_timestamp” column. This is the time of when the operation in the entry was performed.

Figure 34. JSON and a timestamp for the operation.

Figure 35 contains the “target_entry” column. The values in this column refer to the “id” column in the “memories_entry” table.

Figure 35. Operation targets.

To understand the next database, journal, I first have to explain some additional file structure of the com.snapchat.android folder. If you recall all the way back to Figure 1, there was a folder labeled “files.” Entering that folder reveals the folders seen in Figure 36. Figure 37 shows the contents of the “file_manager” folder.

Figure 36. “Files” structure.

Figure 37. file_manager.

The first folder of interest here is “media_package_thumb,” the contents of which can be seen in Figure 38.

Figure 38. Thumbnails?

Examining the first file here in hex finds a familiar header: 0xFF D8 FF E0…yoya. These things are actually JPEGs. So, I opened a command line in the folder, typed ren *.* *.jpg and BAM: pictures! See Figure 39.

Figure 39. Pictures!

Notice there are a few duplications here. However, there are some pictures here that were not saved to memories and were not saved anywhere else. As an example, see the picture in Figure 40.

Figure 40. A non-saved, non-screenshot picture.
Figure 40 is a picture of the front of my employer’s building. For documentation purposes, I put a text overlay in the picture with the date/time I took it (to accompany my notes). I then snapped this picture to Fake Account 2, but did not save it to Memories, did not save it to my Photos app, and did not screenshot it. However, here it is, complete with the overlay. Now, while this isn’t the original picture (it is a thumbnail) it can still be very useful; one would need to examine the “snap” table in the main database to see if there was any activity around the MAC times for the thumbnail.
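
A mass rename works, but checking the header first is a little safer, particularly once videos end up mixed in with the pictures. Below is a minimal sketch that renames only files beginning with the JPEG magic bytes; as always, run it against a working copy of the extraction, never the original.

import os

JPEG_MAGIC = b"\xff\xd8\xff"

def rename_jpegs(folder):
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path) or name.lower().endswith(".jpg"):
            continue
        with open(path, "rb") as f:
            header = f.read(3)
        if header == JPEG_MAGIC:
            os.rename(path, path + ".jpg")   # append .jpg so the original file name is preserved

rename_jpegs("files/file_manager/media_package_thumb")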

The next folder of interest is the “memories_media” folder. See Figure 41.

Figure 41. Hmm…

There are 10 items here. These are also JPEGs. I performed the same operation here as I did in the “media_package_thumb” folder and got the results seen in Figure 42.

Figure 42. My Memories, sans overlays.

These are the photographs I placed in Memories, but the caption overlays are missing. The picture that is in MEO is also here (the file starting with F5FC6BB…). Additionally, these are high resolution pictures.

You may be asking yourself “What happened to the caption overlays?” I’m glad you asked. They are stored in the “memories_overlay” folder. See Figure 43.

Figure 43. My caption overlays.

Just like the previous two folders, these are actually JPEGs. I performed the rename function, and got the results seen in Figure 44. Figure 45 shows the overlay previously seen in Figure 30.

Figure 44. Overlays.

Figure 45. The Megaman overlay from Figure 30.

The folder “memories_thumbnail” is the same as the others, except it contains just the files in Memories (with the overlays). For brevity’s sake, I will just say the methodology to get the pictures to render is the same as before. Just be aware that while I just have pictures in my Memories, a user could put videos in there, too, so you could have a mixture of media. If you do a mass-renaming, and a file does not render, the file extension is probably wrong, so adjust the file extension(s) accordingly.

Now that we have discussed those file folders, let’s get back to the journal database. This database keeps track of everything in the “file_manager” directory, including those things we just discussed. Figure 46 shows the top level of the database’s entries.

Figure 46. First entries in the journal database.

If I filter the “key” column using the term “package” from the “media_package_thumb” folder (the “media_package_thumb.0” files) I get the results seen in Figure 47.

Figure 47. Filtered results.

The values in the “key” column are the file names for the 21 files seen in Figure 38. The values seen in the “last_update_time” column are the timestamps for when I took the pictures. This is a method by which examiners/investigators could potentially recover snaps that have been deleted.
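
The same filter can be applied outside of a GUI. The sketch below assumes the journal database has a single table holding the “key” and “last_update_time” columns and that the timestamps are Unix epoch milliseconds; neither assumption is something I have pinned down, so list the tables first and adjust.

import sqlite3
from datetime import datetime, timezone

con = sqlite3.connect("journal")   # copy of the journal database from the databases folder
tables = [name for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print("tables:", tables)

TABLE = "journal"   # swap in the actual table name from the list above (assumption on my part)
if TABLE in tables:
    for key, ts in con.execute(
        f"SELECT key, last_update_time FROM {TABLE} WHERE key LIKE 'media_package_thumb%'"
    ):
        # treating last_update_time as Unix epoch milliseconds (assumption)
        print(key, datetime.fromtimestamp(ts / 1000, tz=timezone.utc))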

WHAT ELSE IS THERE?

As it turns out, there are a few more, non-database artifacts left behind which are located in the “shared_prefs” folder seen in Figure 1. The contents can be seen in Figure 48.

Figure 48. shared_prefs contents.

The first file is identity_persistent_store.xml seen in Figure 49. The file contains the timestamp for when Snapchat was installed on the device (INSTALL_ON_DEVICE_TIMESTAMP), when the first logon occurred on the device (FIRST_LOGGED_IN_ON_DEVICE_TIMESTAMP), and the last user to logon to the device (LAST_LOGGED_IN_USERNAME).

Figure 49. identity_persistent_store.xml.

Figure 50 shows the file LoginSignupStore.xml. It contains the username that is currently logged in.

Figure 50. Who is logged in?

The file user_session_shared_pref.xml has quite a bit of account data in it, and is seen in Figure 51. For starters, it contains the display name (key_display_name), the username (key_username), and the phone number associated with the account (key_phone).

The value “key_created_timestamp” is notable. This timestamp converts to November 29, 2018 at 15:13:34 (EST). Based on my notes from my Nougat image, this was around the time I established Fake Account 1, which was used in the creation of the Nougat image. This might be a good indicator of when the account was established, although you could always get that data by serving Snapchat with legal process.

Rounding it out is the “key_user_id” (seen in the Friends table of the main database) and the email associated with the account (key_email).

Figure 51. user_session_shared_pref.xml

CONCLUSION

Snapchat’s reputation precedes it very well. I have been in a few situations where examiners/investigators automatically threw up their hands and gave up after having been told that potential evidence was generated/contained in Snapchat. They wouldn’t even try. I will say that while I always have (and will) try to examine anything regardless of what the general consensus is, I did share a bit of others’ skepticism about the ability to recover much data from Snapchat. However, this exercise has shown me that there is plenty of useful data left behind by Snapchat that can give a good look into its usage.

Update

Alexis Brignoni over at Initialization Vectors noticed that I failed to address something in this post. First, thanks to him for reading and contacting me. 🙂 Second, he noticed that I did not address Cellebrite Physical Analyzer’s (v 7.19) and Axiom’s (v 3.0) ability to parse my test Snapchat data (I addressed the extraction portion only).

We both ran the test data against both tools and found both failed to parse any of the databases. Testing found that while Cellebrite found the pictures I describe in this post, it did not apply the correct MAC times to them (from the journal.db). Axiom failed to parse the databases and failed to identify any of the pictures.

This is not in any way shape or form a knock on or an attempt to single out these two tools; these are just the tools to which I happen to have access. These tools work, and I use them regularly. The vendors do a great job keeping up with the latest developments in both the apps and the operating systems. Sometimes, though, app developers will make a hard turn all of a sudden, and it does take time for the vendors to update their tools. Doing so requires R&D and quality control via testing, which can take a while depending on the complexity of the update.

However, this exercise does bring to light an important lesson in our discipline, one that bears repeating: test and know the limitations of your tools. Knowing the limitations allows you to know when you may be missing data/getting errant readings. Being able to compensate for any shortcomings and manually examine the data is a necessary skillset in our discipline.

Thank you Alexis for the catch and assist!

Ridin’ With Apple CarPlay

I have been picking on Google lately.  In fact, all of my blog posts thus far have focused on Google things.  Earlier this year I wrote a blog about Android Auto, Google’s solution for unifying telematic user interfaces (UIs), and in it I mentioned that I am a daily CarPlay driver.  So, in the interest of being fair, I thought I would pick on Apple for a bit and take a look under the hood of CarPlay, Apple’s foray into automotive telematics.

Worldwide, 62 different auto manufacturers make over 500 models that support CarPlay.  Additionally, 6 after-market radio manufacturers (think Pioneer, Kenwood, Clarion, etc.) support CarPlay.  In comparison, 41 auto manufacturers (again, over 500 models – this is an increase since my earlier post) and 19 after-market radio manufacturers support Android Auto.  CarPlay runs on iPhone 5 and later.  It has been a part of iOS since its arrival (in iOS 7.1), so there is no additional app to download (unlike Android Auto).  A driver simply plugs the phone into the car (or wirelessly pairs it if the car supports it) and drives off; a wired connection negates the need for a Bluetooth connection.  The toughest thing about CarPlay setup is deciding how to arrange the apps on the home screen.

In roughly 5 years’ time CarPlay support has grown from 3 to 62 different auto manufacturers.  I can remember shopping for my 2009 Honda (in 2012) and not seeing anything mentioned about hands-free options.  Nowadays, support for CarPlay is a feature item in a lot of car sales advertisements.  With more and more states enacting distracted driving legislation, I believe using these hands-free systems will eventually become mandatory.

Before we get started, let’s take a look at CarPlay’s history.

Looking in the Rearview Mirror

The concept of using an iOS device in a car goes back further than most people realize.  In 2010 BMW announced support for iPod Out, which allowed a driver to use their iPod via an infotainment console in select BMW & Mini models.

Figure 1.  iPod Out.  The great-grandparent of CarPlay.

Figure 2.  iPod Out (Playback).

The iPod connected to the car via the 30-pin to USB cable, and it would project a UI to the screen in the car.  iPod Out was baked in to iOS 4, so the iPhone 3G, 3GS, 4, and the 2nd and 3rd generation iPod Touches all supported it.  While BMW was the only manufacturer to support iPod Out, any auto manufacturer could have supported it; however, it just wasn’t widely advertised or adopted.

In 2012 Siri Eyes Free was announced at WWDC as part of iOS 6.  Siri Eyes Free would allow a user to summon Siri (then a year old in iOS) via buttons on a steering wheel and issue any command that one could normally issue to Siri.  This differed from iPod Out in that there was no need for a wired-connection.  The car and iOS device (probably a phone at this point) utilized Bluetooth to communicate.  The upside to Siri Eyes Free, beyond the obvious safety feature, was that it could work with any in-car system that could utilize the correct version of the Bluetooth Hands-Free Profile (HFP).  No infotainment center/screen was necessary since it did not need to project a UI.  A handful of auto manufacturers signed on, but widespread uptake was still absent.

At the 2013 WWDC Siri Eyes Free morphed in to iOS in the Car, which was part of iOS 7.  iOS in the Car can be thought of as the parent of CarPlay, and closely resembles what we have today.  There were, however, some aesthetic differences, which can be seen below.

Figure 3.  Apple’s Eddy Cue presenting iOS in the Car (Home screen).

Figure 4.  Phone call in iOS in the Car.

Figure 5.  Music playback in iOS in the Car.

Figure 6.  Getting directions.

Figure 7.  Navigation in iOS in the Car.

iOS in the Car needed a wired connection to the vehicle, or so was the general thought at the time.  During the iOS 7 beta, switches were found indicating that iOS in the Car could, potentially, operate over a wireless connection, and there was even mention of it possibly leveraging AirPlay (more on that later in this post).  Unfortunately, iOS in the Car was not present when iOS 7 was initially released.

The following spring Apple presented CarPlay, and it was later released in iOS 7.1.  At launch there were three auto manufacturers that supported it:  Ferrari, Mercedes-Benz, and Volvo.  Personally, I cannot afford cars from any of those companies, so I am glad more manufacturers have added support.

CarPlay has changed very little since its release.  iOS 9 brought wireless pairing capabilities to car models that could support it, iOS 10.3 added recently used apps to the upper left part of the screen, and iOS 12 opened up CarPlay to third party navigation applications (e.g. Google Maps and Waze).  Otherwise, CarPlay’s functionality has stayed the same.

With the history lesson now over, there are a couple of things to mention.  First, this research was conducted using my personal phone, an iPhone XS (model A1920) running iOS 12.2 (build 16E227).  So, while I do have data sets, I will not be posting them online as I did with the Android Auto data.  If you are interested in the test data, contact me through the blog site and we’ll talk.

Second, at least one of the files discussed (the cache file in the locationd path) is in a protected area of iPhone, so there are two ways you can get to it:  jailbreaking iPhone or using a “key” with a color intermediate between black and white. The Springboard and audio data should be present in an iTunes backup or in an extraction from your favorite mobile forensic tool.

Let’s have a look around.

Test Drive

I have been using CarPlay for the past two and a half years.  A majority of that time was with an after-market radio from Pioneer (installed in a 2009 Honda), and the last six months have been with a factory-installed display unit in a 2019 Nissan.  One thing I discovered is that there are some slight aesthetic differences in how each auto manufacturer/after-market radio manufacturer visually implements CarPlay, so your visual mileage may vary.  However, the functionality is the same across the board.  CarPlay works just like iPhone.

Figure 8 shows the home screen of CarPlay.

Figure 8.  CarPlay’s home screen.

The home screen looks and operates just like iPhone, which was probably the idea.  Apple did not want users to have a large learning curve when trying to use CarPlay.  Each icon represents an app, and the apps are arranged in rows and columns.  Unlike iPhone, creating folders is not an option, so it is easy to have multiple home screens. The icons are large enough to where not much fine motor skill is necessary to press one, which means you probably won’t be hunting for or pressing the wrong app icon very often.

The button in the orange box is the home button.  It is persistent across the UI, and it works like the iPhone home button:  press it while anywhere and you are taken back to the home screen.  The area in the blue box indicates there are two home screens available, and the area in the red box shows the most recently used apps.

Most of the apps should be familiar to iPhone users, but there is one that is not seen on iPhone:  the Now Playing app.  This thing is not actually an app…it can be thought of more like a shortcut.  Pressing it will bring up whatever app currently has control of the virtual sound interface of CoreAudio (i.e. whatever app is currently playing or last played audio if that app is suspended in iPhone’s background).

Swiping left shows my second home screen (Figure 9).  The area in the red box is the OEM app.  If I were to press it, I would exit the CarPlay UI and return to Nissan Connect (Nissan’s telematic system); however, CarPlay is still running in the background.  The OEM app icon will change depending on the auto maker.  So, for example, if you were driving a Honda, this icon would be different.

Figure 9.  The second batch of apps on the second home screen.

A user can arrange the apps any way they choose and there are two ways of doing this, both of which are like iPhone.  The first way is to press and hold an app on the car display unit, and then drag it to its desired location.  The second way is done from the screen seen in Figure 10.

Figure 10.  CarPlay settings screen.

The screen in Figure 10 can be found on iPhone by navigating to Settings > General > CarPlay and selecting the CarPlay unit (or units – you can have multiple)…mine is “NissanConnect.”  Moving apps around is the same here as it is on the display unit (instructions are present midway down the screen).  Apps that have a minus sign badge can be removed from the CarPlay home screen.  When an app is removed it is relegated to the area just below the CarPlay screen; in Figure 10 that area holds the MLB AtBat app, AudioBooks (iBooks), and WhatsApp.  If I wanted to add any relegated apps to the CarPlay home screen I could do so by pushing the plus sign badge.  Some apps cannot be relegated:  Phone, Messages, Maps, Now Playing, Music, and the OEM app.  Everything else can be relegated.

One thing to note here.  iOS considers the car to be a USB accessory, so CarPlay does have to abide by the USB Restricted Mode setting on iPhone (if enabled).  This is regardless of whether the Allow CarPlay While Locked toggle switch is set to the on position.

The following screenshots show music playback (Figure 11), navigation (Figure 12), and podcast playback (Figure 13).

Figure 11.  Music playback.

Figure 12.  Navigation in CarPlay.

Figure 13.  Podcast playback.

Messages in CarPlay is a stripped-down version of Messages on iPhone.  The app will display a list of conversations (see Figure 14), but it will not display text of the conversations (Apple obviously doesn’t want a driver reading while driving).  Instead, Siri is used for both reading and dictating messages.

Figure 14.  Messages conversation list.

Phone is seen in Figure 15; specifically, the Favorites tab.  The tabs at the top of the screen mirror those that are seen on the bottom in the Phone app on iPhone (Favorites, Recents, Contacts, Keypad, and Voicemail).  Those tabs look just like those seen in iPhone.

Figure 15.  Phone favorites.

Figure 16.  The keypad in Phone.

If I receive a phone call, I can answer it in two ways:  pressing the green accept button (seen in Figure 17) or pushing the telephone button on my steering wheel.  Answering the call changes the screen to the one seen in Figure 18.  Some of the items in Figure 18 look similar to those seen in iOS in the Car (Figure 4).

Figure 17.  An incoming call.

Figure 18.  An active phone call.

Most apps will appear like those pictured above, although, there may be some slight visual/functional differences depending on the app’s purpose, and, again, there may be some further visual differences depending on what car or after-market radio you are using.

Speaking of purpose, CarPlay is designed to do three things:  voice communication, audio playback, and navigation.  These things can be done fairly well through CarPlay, and done safely, which, I believe, is the main purpose.  Obviously, some popular apps, such as Twitter or Facebook, don’t work well in a car, so I don’t expect true social media apps to be in CarPlay any time soon if at all (I could be wrong).

Now that we have had a tour, let’s take a look under the hood and see what artifacts, if any, can be found.

Under the Hood

After snooping around in iOS for a bit I came to a realization that CarPlay is forensically similar to Android Auto:  it merely projects the apps that can work with it on to the car’s display unit, so the individual apps contain a majority of the user-generated data.  Also, like Android Auto, CarPlay does leave behind some artifacts that may be valuable to forensic examiners/investigators, and, just like any other artifacts an examiner may find, these can be used in conjunction with other data sources to get a holistic picture of a device.

One of the first artifacts that I found is the cache.plist file under locationd.  It can be found in the private > var > root > Library > Caches > locationd path.  cache.plist contains the times of last connect and last disconnect.  I did not expect to find connection times in the cache file of the location daemon, so this was a pleasant surprise.  See Figure 19.

Figure 19.  Last connect and last disconnect times.

There are actually three timestamps here, two of which I have identified.  The timestamp in the red box is the last time I connected to my car. It is stored in CF Absolute Time (aka Mac Absolute Time), which is the number of seconds since January 1, 2001 00:00:00 UTC.  The time, 576763615.86389804, converts to April 12, 2019 at 8:06:56 AM (EDT).  I had stopped at my favorite coffee shop on the way to work and when I hopped back in the car, I plugged in my iPhone and CarPlay initialized.  See Figure 20.

Figure 20.  Time of last connect.

The timestamp in the green box, just under the string CarKit NissanConnect, is a bit deceptive.  It is the time I disconnected from my car.  Decoding it converts it to April 12, 2019 at 8:26:18 AM (EDT).  Here, I disconnected from my car, walked into work, and badged in at 8:27:14 AM (EDT).  See Figure 21.

Figure 21.  Time of last disconnect.

The time in the middle, 576764725.40157998, is just under a minute before the timestamp in the green box.  Based on my notes, it is the time I stopped playback on a podcast that I was listening to at the time I parked.  I also checked KnowledgeC.db (via DB Browser for SQLite) and found an entry in it for “Cached Locations,” with the GPS coordinates being where I parked in my employer’s parking lot.  Whether the middle timestamp represents the time the last action was taken in CarPlay is a good question and requires more testing.
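
If you would rather not convert these by hand, the sketch below loads the plist and flags anything that looks like a CF Absolute value (seconds since January 1, 2001 UTC).  I am deliberately not naming the keys from memory, so it simply walks the whole structure; the file name and path are as described above, copied out of the extraction.

import plistlib
from datetime import datetime, timezone, timedelta

COCOA_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def walk(node, path=""):
    if isinstance(node, dict):
        for key, value in node.items():
            walk(value, f"{path}/{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            walk(value, f"{path}[{i}]")
    elif isinstance(node, (int, float)) and not isinstance(node, bool) and 4e8 < node < 8e8:
        # plausible CF Absolute range (roughly 2013-2026); a heuristic, not a guarantee
        print(f"{path}: {node} -> {(COCOA_EPOCH + timedelta(seconds=node)).isoformat()}")

with open("cache.plist", "rb") as f:   # the locationd cache.plist pulled from the device
    walk(plistlib.load(f))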

The next file of interest here is the com.apple.carplay.plist file.  It can be found by navigating to the private > var > mobile > Library > Preferences path.  See Figure 22.

Figure 22.  carplay.plist

The area in the red box is of interest.  Here the name of the car that was paired is seen (NissanConnect) along with a GUID.  The fact that the term “pairings” (plural) is there along with a GUID leads me to believe that multiple cars can be paired with the same iPhone, but I wasn’t able to test this as I am the only person I know that has a CarPlay capable car.  Remember the GUID because it is seen again in discussing the next artifact.  For now, see Figure 23.

Figure 23.  Main CarPlay setting page in iOS.

Figure 23 shows the settings page just above the one seen in Figure 10.  I show this merely to show that my car is labeled “NissanConnect.”

The next file is 10310139-130B-44F2-A862-7095C7AAE059-CarDisplayIconState.plist.  It can be found in the private > var > mobile > Library > Springboard path.  The first part of the file name should look familiar…it is the GUID seen in the com.apple.carplay.plist file.  This file describes the layout of the home screen (or screens if you have more than one).  I found other files in the same path with the CarDisplayIconState string in their file names, but with different GUIDs, which causes me to further speculate that multiple CarPlay units can be synced with one iPhone.  See Figure 24.

Figure 24.  CarPlay Display Icon State.

The areas in the red and blue boxes represent my home screens.  The top-level item in the red box, Item 0, represents my first home screen, and the sub-item numbers represent the location of each icon on the first home screen.  See Figure 25 for the translation.

Figure 25.  Home screen # 1 layout.

The area in the blue box in Figure 24 represents my second home screen, and, again, the sub-item numbers represent the location of each icon on the screen.  See Figure 26 for the translation.

Figure 26.  Home screen # 2 layout.

The entry below the blue box in Figure 24 is labeled “metadata.”  Figure 27 shows it in an expanded format.

Figure 27.  Icon state “metadata.”

The areas in the green and purple boxes indicate that the OEM app icon is displayed, and that it is “Nissan” (seen in Figure 26).  The areas in the orange and blue boxes describe how the app icon layout should be (four columns and two rows).  The area in the red box is labeled “hiddenIcons,” and refers to the relegated apps previously seen in Figure 10.  As it turns out, the item numbers also describe their positions.  See Figure 28.

Figure 28.  Hidden icon layout.

Notice that this file did not describe the location of the most recently used apps in CarPlay (the area in the upper left portion of the display screen).  That information is described in com.apple.springboard, which is found in the same path.  See Figure 29.

Figure 29.  Springboard and most recently used apps.

Just like the app icon layout previously discussed, the item numbers for each most recently used app translate to positions on the display screen.  See Figure 30 for the translation.

Figure 30.  Most recently used apps positions.

The next file is the com.apple.celestial.plist, which is found in the private > var > mobile > Library > Preferences path.  This file had a bunch of data in it, but there are three values in this file that are relevant to CarPlay.  See Figure 31.

Figure 31.  Celestial.

The string in the green box represents what app had last played audio within CarPlay prior to iPhone being disconnected from the car.  The area in blue box is self-explanatory (I had stopped my podcast when I parked my car).  The item in the red box is interesting.  I had been playing a podcast when I parked the car and had stopped playback.  Before I disconnected my iPhone, I brought the Music app to the foreground, but did not have it play any music, thus it never took control of the virtual sound interface in CoreAudio. By doing this, the string in the red box was generated.  Just to confirm this, I tested this scenario a second time, but did not bring the Music app to the foreground; the value nowPlayingAppDisplayIDUponCarPlayDisconnect was not present in the second plist file.  I am sure this key has some operational value, although I am not sure what that value is.  If anyone has any idea, please let me know.

As I mentioned earlier in this post, Siri does a lot of the heavy lifting in CarPlay because Apple doesn’t want you messing with your phone while you’re driving.  So, I decided to look for anything Siri-related, and I did find one thing…although I will say that this  is probably not exclusive to CarPlay.  I think this may be present regardless of whether it occurs in CarPlay or not (more testing).  In the path private > var > mobile > Library > Assistant there is a plist file named PreviousConversation (there is no file extension but the file header indicates it is a bplist).  Let me provide some context.

When I pick up my child from daycare in the afternoons, I will ask Siri to send a message, via CarPlay, to my spouse indicating that my child and I are on the way home, and she usually acknowledges.  The afternoon before I extracted the data from my iPhone (04/11/2019), I had done just that, and, after a delay, my spouse had replied “Ok.”

PreviousConversation contains the last conversation I had with Siri during this session. When I received the message, I hit the notification I received at the top of the CarPlay screen, which triggered Siri.  The session went as so:

Siri:                 “[Spouse’s name] said Ok.  Would you like to reply?”

Me:                  “No.”

Siri:                 “Ok.”

See Figure 32.

Figure 32.  Session with Siri.

The area in the red box is the name of the sender, in this case, my spouse’s (redacted) name.  The orange box was spoken by Siri, and the blue box is the actual iMessage I received from my spouse.  The purple box is what was read to me, minus the actual iMessage.  Siri’s inquiry (about my desire to reply) is seen in Figure 33.

Figure 33.  Would you like to reply?

Figure 34 contains the values of the message sender (my spouse).  Inside of the red box the field “data” contains the iMessage identifier…in this case, my spouse’s phone number.  The field “displayText” is my spouse’s name (presumably pulled from my Contact’s list).  Figure 35 has the message recipient information:  me.

Figure 34.  Message sender.

Figure 35.  Message recipient (me) plus timestamp.

Figure 35 also has the timestamp of when the message was received (orange box), along with my spouse’s chat identifier (blue box).

Figure 36.  Siri’s response.

Figure 36 shows Siri’s last response to me before the session ended.

Interesting note:  this plist file had other interesting data in it.  One thing that I noticed is that each possible response to the inquiry “Would you like to reply?” had an entry in here:  “Call” (the message sender), “Yes” (I’d like to reply), and “No” (I would not like to reply).  It might be a good research project for someone.  🙂

The next artifact comes from a file previously discussed:  com.apple.celestial.plist.  While examining this file I found something interesting that bears mentioning in this post.  My iPhone has never been paired via Bluetooth with my 2019 Nissan.  When I purchased the car, I immediately started using CarPlay, so there has been no need to use Bluetooth (other than testing Android Auto).  Under the endpointTypeInfo key I found the area seen in Figure 37.

CarBT.jpg
Figure 37.  What is this doing here?

The keys in the red box contain the Bluetooth MAC address for my car.  I double-checked the Bluetooth settings on the phone and the car; the car’s Bluetooth radio was turned off, but the phone’s radio was on (due to my Apple Watch).  So, how does my iPhone have the Bluetooth MAC address for my car?  I do have a theory, so stay with me for just a second.  See Figure 38.

IMG_0814
Figure 38.  AirPlay indicator.

Figure 38 shows the home screen of my iPhone while CarPlay is running.  Notice that the AirPlay/Bluetooth indicator is enabled (red box).  Based on some great reverse engineering, it was found that any device that uses the AirPlay service will use its MAC address in order to identify itself (deviceid).  Now, see Figure 39.

AudioInterfaces
Figure 39. Virtual Audio Interfaces for AirPlay and CarPlay.

Figure 39 shows two files, both of which are in the Library > Audio > Plugins > HAL path.  The file on the left is the info.plist file for the Halogen driver (the virtual audio interface) for AirPlay and the file on the right is the info.plist file for the Halogen driver for CarPlay.  The plug-in identifiers for each (both starting with EEA5773D) are the same.  My theory is that CarPlay may be utilizing AirPlay protocols in order to function, at least for audio.  I know this is a stretch as those of us that use AirPlay know that it typically is done over a wireless connection, but I think there is a small argument to be made here.  Obviously, this requires more research and testing, and it is beyond the scope of this post.

Conclusion

CarPlay is Apple’s attempt at (safely) getting into your car.  It provides a singular screen experience between the iPhone and the car, and it encourages safe driving.  While a majority of the user-generated artifacts are kept by the individual apps that are used, there are artifacts specific to CarPlay that are left behind.  The app icon layout, time last connected and disconnected, and last used app can all be found in these artifacts.  There are also some ancillary artifacts that may be useful to examiners/investigators.

It has been a long time since I really dug around in iOS, and I saw a lot of interesting things that I think would be great to research, so I may be picking on Apple again in the near future.

Google Search Bar & Search Term History – Are You Finding Everything?

Search history.  It is an excellent way to peer into someone’s mind and see what they are thinking at a particular moment in time.  In a court room, search history can be used to show intent (mens rea).  There are plenty of examples where search history has been used in court to establish a defendant’s intent.  Probably the most gruesome was the New York City Cannibal Cop trial, where prosecutors used the accused’s search history against him.  Of course, there is a fine line between intent and protected speech under the First Amendment.

Over the past month and a half I have published a couple of blog posts dealing with Google Assistant and some of the artifacts it leaves behind, which you can find here and here.  While poking around I found additional artifacts present in the same area that have nothing to do with Google Assistant:  search terms.

I shouldn’t have been surprised, but I was; after all, the folder where this data was found has “search” in the title (com.google.android.googlequicksearchbox).  The surprising thing about these search terms is that they are unique to this particular area in Android; they do not appear anywhere else, so it is possible that you or I (or both) have been missing pertinent artifacts in our examinations (I know I have missed something).  Conducting a search via this method can trigger Google Chrome to go to a particular location on the Internet, but the term used to conduct the search is missing from the usual spot in the History.db file in Chrome.

My background research on the Google Search Bar (as it is now known) found that this feature may not be used as much as, say, the search/URL bar inside Chrome.  In fact, there are numerous tutorials online that show a user how to remove the Google Search Bar from Android’s Home Screen, presumably to make more space for home screen icons.  I will say, however, that while creating two Android images (Nougat and Oreo), having that search bar there was handy, so I can’t figure out why people wouldn’t use it more.  But, I digress…

Before I get started there are a few things to note.  First, the data for this post comes from two different flavors of Android:  Nougat (7.1.2) and Oreo (8.1).  The images can be found here and here, respectively.  Second, the device used for each image was the same (LG Nexus 5X), and it was rooted both times using TWRP and Magisk.  Third, I will not provide a file structure breakdown here as I did in the Google Assistant blog posts.  This post will focus on the pertinent contents along with content markers within the binarypb files.  I found the binarypb files related to Google Search Bar activity to contain far more protobuf data than those from Google Assistant, so a file structure breakdown is impractical.

Finally, I thought it might be a good idea to give some historical context about this feature by taking a trip down memory lane.

A Quick Background

Back in 2009 Google introduced what, at the time, it called Quick Search Box for Android, which arrived with Android 1.6 (Donut).  It was designed as a place a user could go to type a word or phrase and search not only the local device but also the Internet.  Developers could adjust their apps to expose services and content to Quick Search Box so returned results would include their app.  The neat thing about this feature was that it was contextually/location aware, so, for example, I could type the word “weather” and it would display the weather conditions for my current location.  All of this could occur without the need for another app on the phone (depending on the search).

QSB-Doughnut

Google Quick Search Box – circa 2009.

Searching.png

Showtimes…which one do you want?

Prior to Google Assistant, Quick Search Box had a vocal input feature (the microphone icon) that could execute commands (e.g. call Mike’s mobile) and that was about it.  Compared to today this seems archaic, but, at the time, it was cutting edge.

VocalInput.png

Yes, I’m listening.

Fast forward three years to 2012’s Jelly Bean (4.1).  By that time Quick Search Box (QSB) had been replaced by Google Now, Google’s search and prediction service.  If we were doing Ancestry.com or 23andMe, Google Now would definitely be a genetic relative of Google Search Bar/Google Assistant.  The resemblance is uncanny.

android_41_jelly_bean_ss_08_verge_300.jpg

Mom, is that you?  Google Now in Jelly Bean

The following year, Kit Kat allowed a device to start listening for the hotword “Ok, Google.”  The next big iteration was Now on Tap in 2015’s Marshmallow (6.x), and, with the arrival of Oreo (8.x) we have what we now know today as Google Assistant and the Google Search Bar (GSB).   Recently in Android Pie (9.x) GSB moved from the top part of the home screen to the bottom.

old-navbar-1080x1920

Google Search Bar/Google Assistant at the bottom in Android Pie (9.x).

As of the Fall of 2018 Nougat and Oreo accounted for over half of the total Android install base.  Since I had access to images of both flavors and conducted research on both, the following discussion covers both.  There were a few differences between the two systems, which I will note, but, overall, there was no major divergence.

To understand where GSB lives and the data available, let’s review…

Review Time

GSB and Google Assistant are roommates in both Nougat and Oreo; they both reside in the /data/data directory in the folder com.google.android.googlequicksearchbox.  See Figure 1.

galisting

Figure 1.  GSB & Google Assistant’s home in Android.

This folder holds data about searches that are done from GSB along with vocal input generated by interacting with Google Assistant.  The folder has the usual suspect folders along with several others.  See Figure 2 for the folder listings.

galisting-infile

Figure 2.  Folder listing inside of the googlequicksearchbox folder.

The folder of interest here is app_session.  This folder has a great deal of data, but just looking at what is here one would not suspect anything.  The folder contains several binarypb files, which are binary protocol buffer files.  These files are Google’s home-grown, XML-ish rival to JSON files.  They contain data that is relevant to how a user interacts with their device via Google Assistant and GSB.    See Figure 3.

Figure 3.PNG

Figure 3.  binarypb file (Nougat).

A good deal of the overall structure of these binarypb files differs from that of the files generated by Google Assistant, and I found the GSB binarypb files harder to read than the Google Assistant files.  However, the concept is similar:  there are markers that allow an examiner to quickly locate and identify the pertinent data.

Down in the Weeds

To start, I chose 18551.binarypb in the Nougat image (7.1.2).  This search occurred on 11/30/2018 at 03:55 PM (EST).  The search was conducted while the phone was sitting on my desk in front of me, unlocked and displaying the home screen.  The term I typed in to the GSB was “dfir.”  I was presented with a few choices, and then chose the option that took me to the “AboutDFIR” website via Google Chrome.  The beginning of the file appears in Figure 4.

Figure 4.PNG

Figure 4.  Oh hello!

While not a complete match, this structure is slightly similar to that of the Google Assistant binarypb files.  The big takeaway here is the “search” in the blue box.  This is what this file represents/where the request is coming from.  The BNDLs in the red boxes are familiar to those who have read the Google Assistant posts.  While BNDLs are scattered throughout these files, it is difficult to determine where the individual transactions occur within the binarypb files, thus I will ignore them for the remainder of the post.

Scrolling down a bit finds the first area of interest seen in Figure 5.

Figure 5.PNG

Figure 5.  This looks familiar.

In the Google Assistant files, there was an 8-byte string that appeared just before each vocal input.  Here there is a 4-byte string (0x40404004 – green box) that appears before the search term (purple box).  Also present is a time stamp in Unix Epoch Time format (red box).  The string, 0x97C3676667010000, is read little endian and converted to decimal; here, that value is 1543611335575.

Figure 6.PNG

Figure 6.  The results of the decimal conversion.

This is the time I conducted the search from GSB on the home screen.
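If you would rather not lean on a GUI decoder every time, the conversion is easy to script.  The sketch below is just the little-endian read and Unix-epoch-milliseconds math described above, using the bytes from Figure 5.

```python
# Minimal sketch:  convert the 8-byte little-endian value that follows the
# search term into a readable time.  The value is milliseconds since the
# Unix epoch (UTC).
import struct
from datetime import datetime, timezone

raw = bytes.fromhex("97C3676667010000")              # bytes as shown in Figure 5
milliseconds = struct.unpack("<Q", raw)[0]           # little endian -> 1543611335575
timestamp = datetime.fromtimestamp(milliseconds / 1000, tz=timezone.utc)
print(milliseconds, timestamp.isoformat())           # 2018-11-30T20:55:35.575000+00:00 (03:55 PM EST)
```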

Down further is the area seen in Figure 7.  The bit in the orange box looks like the Java wrappers in the Google Assistant files.  The string webj and.gsa.widget.text* search dfir and.gsa.widget.text has my search term “dfir” wrapped between two instances of “and.gsa.widget.text.”  Based on Android naming schemas, I believe this to be an “Android Google Search Assistant widget” with text.  This is speculation on my part as I haven’t been able to find anything that confirms or denies it.

Figure 7.PNG

Figure 7.  More search information.

The 4-byte string (green box), my search term (purple box), and the time stamp (red box) are all here.  Additionally, there is the string in the blue box:  a 5-byte string, 0xBAF1C8F803, that was also seen in the Google Assistant files.  In those files, this string appeared just prior to the first vocal input in a binarypb file, regardless of when, chronologically, it occurred during the session (remember, the last thing chronologically in the session was the first thing in those binarypb files).  Here, this string occurs at the second appearance of the search term.
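Since the search term keeps showing up between two copies of the wrapper string, a quick-and-dirty carve is possible.  The sketch below is a raw byte scan based only on the patterns in Figures 5 and 7 — it is not a proper protobuf parse — so expect duplicates and noise, and verify every hit against the hex.

```python
# Minimal sketch:  print the printable text sitting between consecutive
# occurrences of the and.gsa.widget.text wrapper in a GSB .binarypb file.
# In my test files this yields strings like "search dfir".
import re
import sys

MARKER = b"and.gsa.widget.text"

with open(sys.argv[1], "rb") as f:       # e.g. 18551.binarypb
    data = f.read()

offsets = [m.start() for m in re.finditer(re.escape(MARKER), data)]

for start, end in zip(offsets, offsets[1:]):
    between = data[start + len(MARKER):end]
    text = "".join(chr(b) for b in between if 32 <= b < 127)   # keep printable ASCII only
    if text.strip():
        print(hex(start), text.strip())
```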

Traveling further, I find the area depicted in Figure 8.  This area of the file is very similar to that of the Google Assistant files.

Figure 8.PNG

Figure 8.  A familiar layout.

The 16-byte string ending in 0x12 in the blue box is one that was seen in the Google Assistant files.  In those files I postulated this string marked the end of a vocal transaction.  Here, it appears to be doing the same thing.  Just after that, a BNDL appears, then the 4-byte string in the green box, and finally my “dfir” search term (purple box).  Just below this area, in Figure 9, there is a string “android.search.extra.EVENT_ID” and what appears to be some type of identifier (orange box).  Just below that is the same time stamp from before (red box).

Figure 9.PNG

Figure 9.  An identifier.

I am including Figure 10 just to show a similarity between the GSB and Google Assistant files.  In Google Assistant, there was a 16-byte string at the end of the file that looked like the one shown in Figure 8, but it ended in 0x18 instead of 0x12.  In GSB files, that string is not entirely present; part of it is, but not all of it (see the red box).  What is present is the and.gsa.d.ssc. string (blue box), which was also present in the Google Assistant files.

Figure 10.PNG

Figure 10.  The end (?).

The next file I chose was 33572.binarypb.  This search occurred on 12/04/2018 at 08:48 AM (EST).  The search was conducted while the phone was sitting on my desk in front of me, unlocked and displaying the home screen.  The term I typed in to the GSB was “nist cfreds.”  I was presented with a few choices, and then chose the option that took me to NIST’s CFReDS Project website via Google Chrome.  The beginning of the file appears in Figure 11.

Figure 11.PNG

Figure 11.  Looks the same.

This looks just about the same as Figure 4.  As before, the pertinent piece is the “search” in the blue box.  Traveling past a lot of protobuf data, I arrive at the area shown in Figure 12.

Figure 12.PNG

Figure 12.  The same, but not.

Other than the search term (purple box) and time stamp (red box) this looks just like Figure 5.  The time stamp converts to decimal 1543931294855 (Unix Epoch Time).  See Figure 13.

Figure 13.PNG

Figure 13.  Looks right.

As before, this was the time that I had conducted the search in GSB.

Figure 14 recycles what was seen in Figure 7.

Figure 14.PNG

Figure 14.  Same as Figure 7.

Figure 15 is a repeat of what was seen in Figures 8 and 9.

Figure 15.PNG

Figure 15.  Same as Figures 8 & 9.

While I am not showing it here, just know that the end of this file looks the same as the first (seen in Figure 10).

In both instances, after having received a set of results, I chose ones that I knew would trigger Google Chrome, so I thought there would be some traces of my activities there.  I started looking at the History.db file, which shows a great deal of Google Chrome activity.  If you aren’t familiar, you can find it in the data\com.android.chrome\app_chrome\Default folder.  I used ol’ trusty DB Browser for SQLite (version 3.10.1) to view the contents.

As it turns out, I was partially correct.

Figure 16 shows the table “keyword_search_terms” in the History.db file.

Figure 16.PNG

Figure 16.  Something(s) is missing.

This table shows search terms used in Google Chrome.  The term shown, “george hw bush,” is one that I conducted via Chrome on 12/01/2018 at 08:35 AM (EST).  The terms I typed in to GSB to conduct my searches, “dfir” and “nist cfreds,” do not appear.  However, viewing the table “urls,” a table that shows the browsing history for my test Google account, you can see when I went to the AboutDFIR and CFReDS Project websites.  See Figures 17 and 18.

Figure 17

Figure 17.  My visit to About DFIR.

Figure 18.PNG

Figure 18.  My visit to NIST’s CFReDS.

The column “last_visit_time” stores the time of last visit to the site seen in the “url” column.  The times are stored in Google Chrome Time (aka WebKit time), which is a 64-bit value in microseconds since 01/01/1601 at 00:00 (UTC).  Figure 19 shows the time I visited AboutDFIR and Figure 20 shows the time I visited CFReDS.

Figure 19

Figure 19.  Time of my visit to About DFIR.

Figure 20

Figure 20.  Time of my visit to NIST’s CFReDS.
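Pulling this out of History.db is straightforward with a few lines of SQL, and the WebKit time conversion described above is just an offset from 01/01/1601.  Here is a minimal sketch (run it against a copy of the database, not the original); the table and column names are the ones visible in the figures, so double-check them against your version of Chrome.

```python
# Minimal sketch:  dump Chrome search terms and URL visits from a copy of
# History.db, converting last_visit_time from WebKit time (microseconds
# since 1601-01-01 UTC) into a readable timestamp.
import sqlite3
from datetime import datetime, timedelta, timezone

WEBKIT_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def webkit_to_datetime(microseconds):
    return WEBKIT_EPOCH + timedelta(microseconds=microseconds)

connection = sqlite3.connect("History.db")          # work on a copy!

print("-- keyword_search_terms --")
for term, url_id in connection.execute("SELECT term, url_id FROM keyword_search_terms"):
    print(term, url_id)

print("-- urls --")
for url, last_visit in connection.execute("SELECT url, last_visit_time FROM urls"):
    print(webkit_to_datetime(last_visit).isoformat(), url)

connection.close()
```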

I finished searching the Chrome directory and did not find any traces of the search terms I was looking for, so I went back over to the GSB directory and looked there (other than the binarypb files).  Still nothing.  In fact, I did not find any trace of the search terms other than in the binarypb files.  As a last-ditch effort, I ran a raw keyword search across the entire Nougat image, and still did not find anything.

This could potentially be a problem.  Could it be that we are missing parts of the search history in Android?  The History.db file is a great and easy place to look, and I am certain the vendors are parsing that file, but are they looking at and parsing the binarypb files, too?

As I previously mentioned, I also had access to an Oreo image, so I loaded that one up and navigated to the com.google.android.googlequicksearchbox\app_session folder.  Figure 21 shows the file listing.

Figure 21.PNG

Figure 21.  File listing for Oreo.

The file I chose here was 26719.binarypb.  This search occurred on 02/02/2019 at 08:48 PM (EST).  The search was conducted while the phone was sitting in front of me, unlocked and displaying the home screen.  The term I typed in to the GSB was “apple macintosh classic.”  I was presented with a few choices but took no action beyond that.  Figure 22 shows the beginning of the file in which the “search” string can be seen in the blue box.

Figure 22.PNG

Figure 22.  Top of the new file.

Figure 23 shows an area just about identical to that seen in Nougat (Figures 5 and 12).  My search term can be seen in the purple box and a time stamp in the red box.  The time stamp converts to decimal 1549158503573 (Unix Epoch Time).  The results can be seen in Figure 24.

Figure 23.PNG

Figure 23.  An old friend.

Figure 24

Figure 24.  Time when I searched for “apple macintosh classic.”

Figure 23 does show a spot where Oreo differs from Nougat.  The 4-byte string in the green box that appears just before the search term, 0x50404004, is different.  In Nougat, the first byte is 0x40, and here it is 0x50.  A small change, but a change, nonetheless.

Figure 25 shows a few things that appeared in Nougat (Figures 7 & 14).

Figure 25

Figure 25.  The same as Figures 7 & 14.

As seen, the search term is in the purple box, its wrapper is in the orange box, the 4-byte string appears in the green box, and the 5-byte string seen in the Nougat and Google Assistant files is present (blue box).

Figure 26 shows the same objects as those in the Nougat files (Figures 8, 9, & 15).  The 16-byte string ending in 0x12, the 4-byte string (green box), my search term (purple box), some type of identifier (orange box), and the time stamp (red box).

Figure 26.PNG

Figure 26.  Looks familiar…again.

While not depicted in this post, the end of the file looks identical to those seen in the Nougat files.

Just like before, I traveled to the History.db file to look at the “keyword_search_terms” table to see if I could find any artifacts left behind.  See Figure 27.

Figure 27.PNG

Figure 27.  Something is missing…again.

My search term, “apple macintosh classic,” is missing.  Again.  I looked back at the rest of the GSB directory and struck out.  Again.  I then ran a raw keyword search against the entire image.  Nothing.  Again.

Out of curiosity, I decided to try two popular forensic tools to see if they would find these search terms.  The first tool I tried was Cellebrite Physical Analyzer (Version 7.15.1.1).  I ran both images through PA, and the only search terms I saw (in the parsed data area of PA) were the ones present in Figures 16 & 27; these terms were pulled from the “keyword_search_terms” table in the History.db file.  I ran a search across both images (from the PA search bar) using the keywords “dfir,” “cfreds,” and “apple macintosh classic.”  The only returned hits were the ones from the “urls” table in the History.db file of the Nougat image; the search term in the Oreo image (“apple macintosh classic”) did not show up at all.

Next, I tried Internet Evidence Finder (Version 6.23.1.15677).  It returned the same artifacts Physical Analyzer did, and from the same location, but it did not find the search terms from GSB.

So, two tools that have a good footprint in the digital forensic community missed my search terms from GSB.  My intention here is not to speak ill of either Cellebrite or Magnet Forensics, but to show that our tools may not be getting everything that is available (the vendors can’t research everything).  It is repeated often in our discipline, but it bears repeating here:  always test your tools.

There is a silver lining here, though.  Just to check, I examined my Google Takeout data, and, as it turns out, these searches were present in what was provided by Google.

Conclusion

Search terms and search history are great evidence.  They provide insight into a user’s mindset and can be compelling evidence in a courtroom, civil or criminal.  Google Search Bar provides users a quick and convenient way to conduct searches from their home screen without opening any apps.  These convenient searches can be spontaneous and, thus, dangerous; a user could conduct a search without much thought given to the consequences or how it may look to third parties.  The spontaneity can be very revealing.

Two major/popular forensic tools did not locate the search terms from Google Search Bar, so it is possible examiners are missing search terms/history.  I will be the first to admit, now that I know this, that I have probably missed a search term or two.  If you think a user conducted a search and you’re not seeing the search term(s) in the usual spot, try the area discussed in this post.

And remember:  Always.  Test.  Your.  Tools.

Update

A few days after this blog post was published, I had a chance to test Cellebrite Physical Analyzer, version 7.16.0.93.  This version does parse the .binarypb files, although you will get multiple entries for the same search, and some entries may have different timestamps.  So, caveat emptor; it will be up to you/the investigator/both of you to determine which is accurate.

I also have had some time to discuss this subject further with Phil Moore (This Week in 4n6), who has done a bit of work with protobuf files (Spotify and the KnowledgeC database).  The thought was to run the .binarypb files through Google’s protoc.exe (found here) and then try to decode the respective fields.  Theoretically, this would make it slightly easier than cruising through the hexadecimal and decoding the time stamps by hand.  To test this, I ran the file 26719.binarypb through protoc.exe.  You can see the results for yourself in Figures 28, 29, and 30, with particular attention being paid to Figure 29.

Figure 28

Figure 28. Beginning of protoc output.

 

Figure 29

Figure 29.  Middle part of the protoc output (spaces added for readability).

 

Figure 30

Figure 30.  Footer of the protoc output.

In Figure 28 the “search” string is identified nicely, so a user could easily see that this represents a search, but you can also see there is a bunch of nonsensical data grouped in octets.  These octets represent the data in the .binarypb file, but how they line up with the hexadecimal/ASCII values is anyone’s guess.  It is my understanding that there is a bit of educated guessing involved when attempting to decode this type of data.  Since protobuf data is serialized and the programmers have carte blanche in determining what key/value pairs exist, the octets could represent anything.

That being said, the lone educated guess I have is that the octet 377 represents 0xFF.  I counted the number of 377’s backwards from the end of the octal time (described below) and found that they matched (24 – there were 24 0xFF’s that preceded the time stamp seen in Figure 23).  Again, this is speculation on my part.

Figure 29 is the middle of the output (I added spaces for readability).  The area in the red box, as discovered by Phil, is believed to be the timestamp, but in an octal (base-8) format…sneaky, Google.  The question mark at the end of the string lines up with the question mark seen at the end of each timestamp seen in the figures of this article.  The area in the green box shows the first half of the Java wrapper that was discussed and seen in Figure 25.  The orange box contains the search string and the last half of the Java wrapper.
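To sanity-check the octal theory, you can turn the escaped bytes that protoc prints (presumably via something like protoc --decode_raw < 26719.binarypb) back into raw bytes and read them the same way as the hex dumps.  The sketch below does exactly that; the escaped string in it is illustrative — I built it from the Figure 23/24 value rather than copying it from the actual protoc output — so treat it as a worked example, not a parser.

```python
# Minimal sketch:  convert a protoc --decode_raw style escaped string back
# into raw bytes, then read the trailing 8 bytes as the little-endian
# millisecond timestamp discussed above.
import struct
from datetime import datetime, timezone

escaped = r"\225\324\n\261h\001\000\000"     # illustrative; encodes 1549158503573

# protoc prints unknown bytes using C-style escapes (octal for non-printables);
# Python's unicode_escape codec understands the same syntax.
raw = escaped.encode("latin-1").decode("unicode_escape").encode("latin-1")

milliseconds = struct.unpack("<Q", raw[-8:])[0]
print(milliseconds, datetime.fromtimestamp(milliseconds / 1000, tz=timezone.utc))
# 1549158503573 2019-02-03 01:48:23.573000+00:00 (08:48 PM EST on 02/02/2019)
```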

Figure 30 shows the end of the protoc output with the and.gsa.d.ssc.16 string.

So, while there is no open-source method of parsing this data as of this writing, Cellebrite, as previously mentioned, has baked it into the latest version of Physical Analyzer; just take care to determine which timestamp(s) is accurate.

OK Computer…er…Google. Dissecting Google Assistant (Part Deux)

NoDisassemble

In part two of this article I will be looking at Google Assistant artifacts that are generated when using a device outside of the car (non-Android Auto). Since this post is a continuation of the first, I will dispense with the usual pleasantries, and jump right into things.  If you have not read Part 1 of this post (dealing with Google Assistant artifacts generated when using Google Assistant via Android Auto), at least read the last portion, which you can do here.  The data (the phone extraction) discussed in both posts can be found here.  Just know that this part will not be as long as the first, and will, eventually, compare the Google Assistant artifacts generated in Android Auto to those generated just using the device.

If you don’t feel like clicking over, let’s recap:

A Slight Review

Google Assistant resides in the /data/data directory.  The folder is com.google.android.googlequicksearchbox.  See Figure 1.

galisting

Figure 1.  Google Assistant’s home in Android.

This folder also holds data about searches that are done from the Quick Search Box that resides at the top of my home screen (in Oreo).  The folder has the usual suspect folders along with several others.  See Figure 2 for the folder listings.

galisting-infile

Figure 2.  Folder listing inside of the googlequicksearchbox folder.

The folder of interest here is app_session.  This folder has a great deal of data, but just looking at what is here one would not suspect anything.  The folder contains several binarypb files, which I have learned, after having done additional research, are binary protocol buffer files.  These files are Google’s home-grown, XML-ish rival to JSON files.  They contain data that is relevant to how a user interacts with their device via Google Assistant.    See Figure 3.

binarypbs

Figure 3.  binarypb files.

Each binarypb file here represents a “session,” which I define as each time Google Assistant was invoked.  Based on my notes, I know when I summoned Google Assistant, how I summoned it, and what I did when I summoned it.  By comparing my notes to the MAC times associated with each binarypb file I identified the applicable files for actions taken inside of the car (via Android Auto) and those taken outside of the car.

During my examination of the binarypb files that were created during sessions inside of the car, I found similarities between each file, which are as follows:

  1. Each binarypb file will start by telling you where the request is coming from (car_assistant).
  2. What is last chronologically is first in the binarypb file.  Usually, this is Google Assistant’s response (MP3 file) to a vocal input just before being handed off to whatever service (e.g. Maps) you were trying to use.  The timestamp associated with this is also at the beginning of the file.
  3. A session can be broken down in to micro-sessions, which I call vocal transactions.
  4. Vocal transactions have a visible line of demarcation by way of the 16-byte string ending in 0x12.
  5. A BNDL starts a vocal transaction, but also further divides the vocal transaction in to small chunks.
  6. The first vocal input in the binarypb file is marked by a 5-byte string: 0xBAF1C8F803, regardless of when, chronologically, it occurred in the session.
  7. Each vocal input is marked by an 8-byte string.  While the 5-byte string appears only with the first vocal input in the binarypb file (along with the 8-byte string), the 8-byte string appears just prior to each and every vocal input in the file.
  8. When Google Assistant doesn’t think it understands you, it generates different variations of what you said…candidates…and then selects the one it thinks you said.
  9. In sessions where Google Assistant needs to keep things tidy, it will assign an identifier. There does not appear to be any consistency (as far as I can tell) as to the format of these identifiers.
  10. The end of the final vocal transaction is marked by a 16-byte string ending in 0x18.

Visually, sessions via Android Auto can be seen in Figure 4, and vocal transactions can be seen in Figure 5.

img_0075

Figure 4.  Visual representation of a session.

 

img_0074

Figure 5.  Visual representation of vocal transactions.

One additional note here.  I was contacted by a reader via Twitter and asked about adding byte offsets to Figures 4 and 5.  Unfortunately, the byte offsets beyond the header are never consistent.  This is due to requests always being different, and, as a result, Google Assistant’s response (whether vocal, by action, or both) is always different.  I think the thing to keep in mind here is that there is a structure and there are some markers to help examiners locate this data.
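Since the byte offsets are never consistent but the markers are, a simple scanner goes a long way when triaging these files.  The sketch below just reports the offsets of the markers described above; the 16-byte value ending in 0x12 is the one shown later in Figure 15, and the 0x18 end-of-session variant is assumed to share the same leading bytes, so treat that last hit as a hint rather than gospel.

```python
# Minimal sketch:  list the byte offsets of the structural markers discussed
# above inside a Google Assistant .binarypb session file.
import sys

MARKERS = {
    "BNDL":               b"BNDL",
    "first vocal input":  bytes.fromhex("BAF1C8F803"),
    "end of transaction": bytes.fromhex("00000006000000000000000000000012"),
    "end of session (?)": bytes.fromhex("00000006000000000000000000000018"),  # assumed leading bytes
    "velvet marker":      b"velvet:query_state:search_result_id",
}

with open(sys.argv[1], "rb") as f:           # e.g. 13099.binarypb
    data = f.read()

for name, pattern in MARKERS.items():
    offsets = []
    position = data.find(pattern)
    while position != -1:
        offsets.append(hex(position))
        position = data.find(pattern, position + 1)
    print(f"{name:20s} {offsets}")
```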

A Deep Dive

To start, I chose 13099.binarypb.  This session occurred on 01/28/2019 at 12:41 PM (EST) and involved reading new text messages and dictating a response.  The session was initiated by “Ok, Google” with the phone sitting on my desk in front of me while the phone was unlocked and displaying the home screen.  The session went like this:

First Dialogue

Figure 6 shows the top of the binarypb file.  In the blue box is something familiar:  the 0x155951 hex value at offset 0x10.  This string was also present in the binarypb files generated while inside the car (via Android Auto).  In the orange box “opa” appears.  This string appears at the top of each binarypb file generated as a result of using Google Assistant outside of the car.  I suspect (based on other data seen in these files) that this is a reference to the Opa programming language.  This would make sense as I see references to Java, too, which is used throughout Android.  Additionally, Opa is aimed at both client-side and server-side operations (Node.js on the server and JavaScript on the client side).  Again, this is speculation on my part, but the circumstantial evidence is strong.

Figure 6

Figure 6. Top of 13099.binarypb.

In the red boxes are the oh-so-familiar “BNDL’s.” In the green box the string “com.google.android.googlequicksearchbox” is seen.  This is the folder in which the Quick Search Box resides, along with the session files for Google Assistant.

Just below the area in Figure 6 is the area in Figure 7.  There are a couple of BNDL’s in this area, along with the strings in the orange box, “TRIGGERED_BY” and “CONVERSATION_DELTA,” which appear to indicate that this part of the file was caused by a change in the conversation between Google Assistant and myself.  See Figure 7.

Figure 7

Figure 7. A change in conversation triggered this vocal transaction

The area in the blue box is interesting as it is a string that is repeated throughout this session.  I suspect…loosely…this is some type of identifier, and the string below it (in Figure 8) is some type of token.

Figure 8.PNG

Figure 8.  ID with possible token…?

I will stop here for a second.  There was a noticeable absence at the top of this file:  there was no MP3 data here.  A quick scan of the entire file finds no MP3 data at all.  Determining whether this is unique to this particular file or a systemic trend will require examining other files (later in this article).

After the area in Figure 8 there was quite a bit of protocol buffer data.  Eventually, I arrived at the area depicted in Figure 9.  In it you can see the identifier from Figure 7 (blue box), a bit more data, and then a time stamp (red box).  The value is 0x65148E9568010000, which, when read little endian is 1548697343077 (Unix Epoch Time).  Figure 10 shows the outcome using DCode.

Figure 9.PNG

Figure 9.  Identifier and Unix Epoch Time time stamp.

 

Figure 10.PNG

Figure 10. Time stamp from Figure 9.

The time stamp here is about a minute ahead of when I initiated the session.  Remember what I said about the last thing chronologically being the first thing in the file?  I suspect the last thing I said to Google Assistant will be the first vocal input data I see.  See Figure 11.

Figure 11.PNG

Figure 11.  Last vocal input of the session.

There is one bit of familiar data in here.  If you read the first part of this article you will know that the string in the blue box (0xBAF1C8F803) appeared just before the first vocal input in a binarypb file, which is usually the last vocal input data of the session.  It did not appear anywhere else within the file.  It appears here, too, in a session outside of the car.

In the orange box is what appears to be some Java data indicating where this session started:  “hotword.”  The hotword is the trigger phrase for Google Assistant, which, for me, is “Ok, Google.”  The 8-byte string in the green box (0x010C404000040200) is consistent throughout the file (save one location – discussed later), and, as suspected, my last vocal input to Google Assistant appears just after it (purple box).  A BNDL appears at the end in the red box.

Figure 12 shows some familiar data (from Figures 7 & 8):  TRIGGERED_BY, CONVERSATION_DELTA, the identifier (blue box) and what I believe to be some token (red box).  Note that the suspected token here matches that seen in Figure 8.

Figure 12

Figure 12.  A rehash of Figures 7 & 8.

 

Figure 13.PNG

Figure 13.  The identifier again and another time stamp.

After some more protocol buffer data I find the area in Figure 13.  It looks the same as the area shown in Figure 9, and the time stamp is the same.

Figure 14 is a somewhat recycled view of what was seen in Figure 11, but with a twist.  The Java data which seems to indicate where the query came from wraps the vocal input (“no”); see the orange box.  A BNDL is also present.

Figure 14.PNG

Figure 14.  Vocal input with a Java wrapper.

Also seen in Figure 14 is another time stamp in the red box.  Read little endian, it is decimal 1548697279859.  As before, I used DCode to convert this from Unix Epoch Time to 01/28/2019 at 12:41:19 (EST).  This is the time I originally invoked Google Assistant.

Figure 15 shows some more data, and the end of the vocal transaction (see my Part 1 post).  This is marked by the velvet:query_state:search_result_id string (purple box) and the 16-byte hex value of 0x00000006000000000000000000000012 (orange box).  The string and accompanying hex value are the same ones seen in the binarypb files generated by interaction with Google Assistant via Android Auto.

Figure 15

Figure 15.  Data marking the end of the vocal transaction.

Figure 16 shows the start of a new vocal transaction.  The BNDL (seen at the bottom of Figure 15, but not marked) is in the red box.  Just below it is the 8-byte string in the green box.  Note that the last byte is 0x10 and not 0x00 as seen in Figure 11.  My vocal input appears in the purple box; this input is what started the session.  Just below it is another BNDL.  See Figure 16.

Figure 16.PNG

Figure 16.  New vocal transaction.

The items below the BNDL are interesting.  The orange box is something previously seen in this file:  TRIGGERED_BY.  However, the item in the blue box is new.  The string is QUERY_FROM_HOMESCREEN, which is exactly what the phone was displaying when I invoked Google Assistant.  The phone was on, unlocked, and I used the hotword to invoke Google Assistant, which leads me to the string in the brown box:  “INITIAL_QUERY.”  The phrase “read my new text messages” was my original request.  This area seems to imply that my phrase was the initial query and that it was made from the home screen.  Obviously, there is plenty more testing that needs to be done to confirm this, but it is a good hypothesis.

Figure 17.PNG

Figure 17.  A time stamp and a “provider.”

In Figure 17 there is a time stamp (red box):  the decimal value is 1548697279878 (Unix Epoch Time) and the actual time is 01/28/2019 at 12:41:19 (EST).  Again, this is the time Google Assistant was invoked.  The portion in the blue box, while not a complete match, is data that is similar to data seen in Android Auto.  I highlighted the whole box, but the area of interest is voiceinteraction.hotword.HotwordAudioProvider /34.  In the Android Auto version, the related string was projection.gearhead.provider /mic /mic.  In the Part 1 post, I indicated that the /mic /mic string indicated where the vocal input was coming from (my in-car microphone used via Android Auto).  Here I believe this string indicates the origin of the Google Assistant invocation is via the hotword, although I am not completely sure about the /34.

The area in the blue box in Figure 18 is new.  I have tried to find what the data in the box means or its significance, and I have been unable to do so.  In addition to searching the Google developer forums, I pulled the phone’s properties over ADB in order to see if I could determine if the data was referring to the on-board microphone and speaker (ear piece), but the list of returned items did not have any of this data.  At this point I have no idea what it means.  If someone knows, please contact me and I will add it to this article and give full credit.

Figure 18-1-1

Figure 18.  Something new.

I had to scroll through some more protocol buffer data to arrive at the area in Figure 18-1.  There are several things here:  the velvet:query_state:search_result_id with the accompanying 16-byte string ending in 0x12 (brown boxes), BNDLs (red boxes), the 8-byte string just prior to my vocal input (green box), my vocal input (purple box), the TRIGGERED_BY, CONVERSATION_DELTA strings (orange box – my response “yes” was the result of a change in the conversation), and the identifier that I had seen earlier in the file (blue box).  Note that while the string in the green box matches the string seen in Figure 11, it differs from the one seen in Figure 16.  The string in Figure 16 ends in 0x10 whereas the string here and the one in Figure 11 both end in 0x00.

Figure 18

Figure 18-1.  The end of one vocal transaction and the beginning of another.

Just past the identifier seen in Figure 18-1, there was another string that I suspect is a token.  This string starts out the same as the one seen in Figures 8 and 12, but it does differ.  See Figure 19.

Figure 19.PNG

Figure 19.  A new “token.”

Scrolling through more protocol buffer data finds the area seen in Figure 20.  Here I find another time stamp (red box).  The decoding methodology is the same as before, and it resulted in a time stamp of 01/28/2019 at 12:41:42 (EST).  This would have been around the time that I indicated that I wanted to reply to the text messages (by saying “yes”) Google Assistant had read to me.  Additionally, the Java string appears (orange box), and the end of the vocal transaction is seen with the velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x12 (blue boxes).

Figure 20

Figure 20.  The end of another vocal transaction.

Figure 21 has my dictated message in it (purple box), along with some familiar data, and a familiar format.

Figure 21

Figure 21.  A familiar face.

At the top is a BNDL (red box), the 8-byte string ending in 0x00 (green box), another BNDL (red box), the TRIGGERED_BY, CONVERSATION_DELTA strings (orange box), and the identifier again (blue box).  In Figure 22 another “token” is found (red box).  This is the same one as seen in Figure 19.

Figure 22.PNG

Figure 22.  Another “token.”

Yet more protocol buffer data, and yet more scrolling, takes me to the area in Figure 23.  In the red box is another time stamp.  In decimal it is 1548697307562 (Unix Epoch Time), which converts to 01/28/2019 at 12:41:47 (EST).  This would have been around the time I dictated my message to Google Assistant.  The identifier also appears at the foot of the protocol buffer data (blue box).

Figure 23.PNG

Figure 23.  Another time stamp.

Figure 24 shows the same data as in Figure 20:  the end of a vocal transaction.  The orange box contains the Java data, and the blue box contains the velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x12.

Figure 24

Figure 24.  End of another vocal transaction.

Beyond my vocal input (purple box), the area seen in Figure 25 is the same as those seen in Figures 18-1 & 21.  I even marked them the same… BNDL (red box), the 8-byte string ending in 0x00 (green box), another BNDL (red box), the TRIGGERED_BY, CONVERSATION_DELTA strings (orange box), and the identifier again (blue box).

Figure 25

Figure 25.  The top of another vocal transaction.

Figure 26 shows an area after some protocol buffer data that trailed the identifier in Figure 25.  The notable thing here is the time stamp in the red box.  It is decimal 1548697321442 (Unix Epoch Time), which translates to 01/28/2019 at 12:42:01 (EST).  This would have lined up with when I sent the dictated text message.

Figure 26.PNG

Figure 26.  Time stamp from “No.”

Figure 27 shows the end of the vocal transaction here.  In the orange box is the Java data, with the velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x12 in the blue box.

Figure 27.PNG

Figure 27.  The end of a vocal transaction.

Figure 28 looks just like Figures 18-1, 21 & 25.  The only difference here is my vocal input (“no”).  This was the last thing I said to Google Assistant in this session, so I expect this last portion of the file (save the very end) to look similar to the top of the file.

Figure 28

Figure 28.  Look familiar?

Figure 29 contains a time stamp (red box), which appears after a bit of protocol buffer data.  It is decimal 1548697343077 (Unix Epoch Time), which converts to 12:42:23 (EST).  This is the same time stamp encountered in this session file seen in Figure 9.

Figure 29.PNG

Figure 29.  The last/first time stamp.

Figure 30 shows the end of the session file with the orange box showing the usual Java data.  The end of this file, as it turns out, looks very similar to end of session files generated via Android Auto.  Three things are present here that are also present in the end of the Android Auto session files.  First, the velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x18 in the blue box.  Second, the 9-byte string, 0x01B29CF4AE04120A10 in the purple box. Third, the string “and.gsa.d.ssc.” is present in the red box.

Figure 30

Figure 30.  A familiar ending.

So, right away I see quite a few similarities between this session file and the ones generated by Android Auto.  In order to have some consistency between these files and those from Android Auto, the next file I examined involved me asking for directions to my favorite coffee joint.

The next file I examined was 13128.binarypb.  This session occurred on 01/28/2019 at 12:43 PM (EST) and involved asking for directions to my favorite coffee joint.  The session was initiated by “Ok, Google” with the phone sitting on my desk in front of me, unlocked, and displaying the home screen.  The session went like this:

Second Dialogue

The screen switched over to Google Maps and gave me the route and ETA.  I did not choose anything and exited Maps.

The top of 13128.binarypb looks identical to 13099.binarypb (Figure 6).  See Figure 31.

Figure 31

Figure 31.  A familiar sight.

The gang is all here.  The string 0x155951 (blue box), “opa” (orange box), com.google.android.googlequicksearchbox (green box), and a couple of BNDL’s (red box).

While no data of interest resides here, I am including Figure 32 just to show that the top of 13128 is just like 13099.

Figure 32.PNG

Figure 32.  Nothing to see here.

A quick note here: this file is just like 13099 in that there is no MP3 data at the beginning of the file. As before, I scanned the rest of the file and found no MP3 data at all. So, this is a definite difference between the Android Auto and non-Android Auto session files.

Figure 33 is something I had seen in the previous file (see Figure 16), but further down.  The blue and orange boxes contain the TRIGGERED_BY and QUERY_FROM_HOMESCREEN strings, respectively.  Just like my previous session, this session was started with the phone on, unlocked, and by using the hotword to invoke Google Assistant, which leads me to the string in the red box: “INITIAL_QUERY.”  This area seems to imply that whatever vocal input is about to show up is the phrase that was the initial query and that it was made from the home screen.

Figure 33.PNG

Figure 33.  Query From Home Screen, Triggered By, Launched On, & Initial Inquiry.

Figure 34 looks almost identical to Figure 17.  The red box contains a time stamp, which is decimal 1548697419294 (Unix Epoch Time).  When converted it is 01/28/2019 at 12:43:39 (EST).  The blue box contains the string voiceinteraction.hotword.HotwordAudioProvider /49.  The /49 is different than the one seen in Figure 17, though (/34).  Again, I am not sure what this is referring to, and I think it warrants more testing.

Figure 34.PNG

Figure 34.  The query source and a time stamp.

Scrolling down just a hair finds the area in Figure 35.  The orange box contains Java data we have seen before, but with a small twist.  The string is webj and.opa.hotword* search and.opa.hotword, with the twist being “search” in the middle.  As seen in the first file, it’s almost as if the term in the middle is being wrapped (my “no” was wrapped the same way, as seen in Figure 14).

Figure 35.PNG

Figure 35.  Something old and something old.

The area in the red box is the same data seen in Figure 18.

Figure 36 also contains some familiar faces.  My vocal input is in the purple box, and the 5-byte string that usually appears with the first vocal input of the session, 0xBAF1C8F803, is here (blue box).

Figure 36.PNG

Figure 36.  The first vocal input of the session.

An 8-byte string previously seen in 13099 is also here (see Figure 16).  Note that this string ends in 0x10.  In 13099 all of the 8-byte strings, save one, ended in 0x00.  The one that did end in 0x10 appeared with the first vocal input of the session (“read my new text messages”).  Here, we see the string ending in 0x10 with the only vocal input of the session.  I hypothesize that the 0x10 appears before the first vocal input of the session, with any additional vocal input appearing with the 8-byte string ending in 0x00.  More research is needed to confirm, which is beyond the scope of this article.

Figures 37 and 38 show the same data as seen in Figures 33 and 34.

Figure 37.PNG

Figure 37.  Same ol’ same ol’.

 

Figure 38.PNG

Figure 38.  Same ol’, part deux.

Figure 39 shows the mysterious string with the speaker id (red box) and Figure 40 shows my vocal input inside of a Java wrapper (orange box), which is similar to what was seen in 13099 (Figure 14).

Figure 39

Figure 39.  Speaker identifier?

 

Figure 40.PNG

Figure 40.  A Java wrapper and a time stamp.

The time stamp seen in Figure 40 is the same as the other time stamps seen in this session file except for the first byte.  In the other time stamps the first byte is 0x1E, whereas here it is 0x08; this shifts the decimal value from 1548697419294 to 1548697419272.  Regardless, the time is the same:  01/28/2019 at 12:43:39 PM (EST).  Only the millisecond values differ:  294 versus 272, respectively.

Figure 41 shows the end of the vocal transaction, which is marked by the  velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x12 in the blue box.

Figure 41.PNG

Figure 41.  The end of the vocal transaction.

The start of a new vocal transaction is seen in Figure 42.  The 8-byte value seen in the green box ends with 0x10, which keeps in line with my theory discussed earlier in this article.  My vocal input (the only input of the session) is seen in the purple box.  A BNDL is seen at the start of the transaction (red box) with another one at the end (red box).

Figure 42.PNG

Figure 42.  The start of another vocal transaction.

In the interest of brevity, I will say that the next bit of the session file is composed of what is seen in Figures 37, 38, and 39 (in that order).  The time stamp is even the same as the one seen in Figure 38.  The next area is the last part of the session file as seen in Figure 43.

Figure 46

Figure 43.  The end!

If Figure 43 looks familiar to you, that is because it is.  I color coded the boxes the same way as I did in Figure 30.  Everything that was there is here:  the Java data (orange box), the velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x18 (blue box), the 9-byte string 0x01B29CF4AE04120A10 (purple box), and the string “and.gsa.d.ssc.” (red box).

So What Changed…If Anything?

At the beginning of this article I reviewed some consistencies between the Android Auto session files I examined.  After examining the non-Android Auto files, I thought it would be beneficial to revisit those consistencies to see what, if anything, changed.  The original statements are in italics, while the status here is just below each item.

  • Each binarypb file will start by telling you where the request is coming from (car_assistant).

This is still correct except “car_assistant” is replaced by “opa” and “googlequicksearchbox.”

  • What is last chronologically is first in the binarypb file. Usually, this is Google Assistant’s response (MP3 file) to a vocal input just before being handed off to whatever service (e.g. Maps) you were trying to use.  The timestamp associated with this is also at the beginning of the file.

This is still correct, minus the part about the MP3 data.

  • A session can be broken down in to micro-sessions, which I call vocal transactions.

This is still correct.

  • Vocal transactions have a visible line of demarcation by way of the 16-byte string ending in 0x12.

This is still correct.

  • A BNDL starts a vocal transaction, but also further divides the vocal transaction in to small chunks.

This is still correct.

  • The first vocal input in the binarypb file is marked by a 5-byte string: 0xBAF1C8F803, regardless of when, chronologically, it occurred in the session.

This is still correct.

  • Each vocal input is marked by an 8-byte string:   While the 5-byte string appears at the first in the binarypb file only (along with the 8-byte string), the 8-byte string appears just prior to each and every vocal input in the file.

Eh…sorta.  While the values in the 8-byte string change between Android Auto and non-Android Auto, a consistent 8-byte string is still present.  Further, the last byte of the 8-byte string in the non-Android Auto version varies depending on whether or not the vocal input is chronologically the first input of the session.

  • When Google Assistant doesn’t think it understands you, it generates different variations of what you said…candidates…and then selects the one it thinks you said.

Unknown.  Because I was in an environment which was quiet, and I was near the phone, Google Assistant didn’t seem to have any trouble understanding what I was saying.  It would be interesting to see what would happen if I introduced some background noise.

  • In sessions where Google Assistant needs to keep things tidy, it will assign an identifier. There does not appear to be any consistency (as far as I can tell) as to the format of these identifiers.

This is correct.  In the 13099 file, there were multiple things happening, so an identifier with something that resembled a token was present.

  • The end of the final vocal transaction is marked by a 16-byte string ending in 0x18.

Still correct.

For those of you that are visual learners, I am adding some diagrams at the end that show the overall, generalized structure of both a session and a vocal transaction.  See Figures 44 and 45, respectively.

OutOfCar-SessionFile.JPG

Figure 44.  Session file.

 

OutOfCar-VocalTransaction

Figure 45.  Vocal transaction.

Conclusion

There is way more work to do here in order to really understand Google Assistant.  Phil Moore, of This Week in 4n6 fame, mentioned Part 1 of this article recently on the This Month in 4N6 podcast, and he made a very accurate statement:  Google Assistant is relatively under-researched.  I concur.  When I was researching this project, I found nothing via Google’s developer forums, and very little outside of those forums.  There just isn’t a whole lot of understanding about how Google Assistant behaves and what it leaves behind on a device.

Google Assistant works with any device capable of running Lollipop (5.0) or higher; globally, that is a huge install base!  Additionally, Google Assistant can run on iOS, which adds to the install base and is a whole other batch of research.  Outside of handsets there are the Google Home speakers, on which there has been some research, Android TVs, Google Home hubs, smart watches, and Pixelbooks/Pixel Slates.  Google is making a push in the virtual assistant space and is going all in with Google Assistant (see Duplex).  With all of these devices capable of running Google Assistant, it is imperative that practitioners learn how it behaves and what artifacts it leaves behind.

OK Computer…er…Google. Dissecting Google Assistant (Part 1)

A few weeks ago, I posted a blog about some research I conducted on Android Auto, and I mentioned there was some interesting data left behind by Google Assistant when using Android Auto.  Based on what I found there, I decided to go further down the virtual assistant rabbit hole to see what I could find.

As far as virtual assistants go, I use Siri.  When I had a newborn, Siri was used a lot.  In addition to turning on/off lights or playing music, I used Siri to turn on certain appliances (via smart plugs), respond to texts, and make phone calls as my hands were usually holding a baby, trying to do something for/to/with a baby, or I was trying to help my spouse do those things.  Siri was really helpful then.  Siri is still useful, but, nowadays, my primary use of Siri is in the car.  There are times where I still yell at my phone or HomePod to find a TV show, play a song, turn on a light, or answer a quick math question.  For other things such as search and typing a message (outside of the car), I’m old fashioned.

I have been fascinated by talking computers/A.I. for a long time.  According to my parents, I used to love Knight Rider.  It had the Knight Industries Two Thousand (K.I.T.T.) – the snarky, crime-fighting automotive sidekick of the Hoff.  As you can see from the GIF above, I also like Star Trek.  Captain Kirk, Spock, et al. have been verbally interacting with their computers since the 1960’s.  WOPR/Joshua, check.  Project 2501, check.  The Architect, check.  And, the crème de la crème:  HAL 9000, check check check.

While researching Google Assistant, I stumbled across an article that had some interesting statistics.  In 2017, there was a 128.9% year-over-year increase in the use of voice-activated assistants, with an expected growth of 40% in 2019.  Another statistic:  2 out of 5 adults use voice search at least once a day; this was in 2017, so I suspect this number is higher by now.  This explosion, in my opinion, started when Alexa arrived.  Say what you like about Amazon, but they were smart to open Alexa to developers.  Alexa is everywhere and owns around 70% of the virtual assistant market.  With Amazon’s recent acquisition of eero (to my dismay), wireless routers, the televisions of the 21st century, will have Alexa in them.  Alexa. Is. Everywhere.  I am sure Google will follow, so I feel it is important to understand any artifact Google Assistant may leave behind on a device.

There are three things to be aware of before I get started.  First, the data I found resides in the /data/ directory, and, thus, is not easily accessible on newer devices unless the device is rooted or you have a tool that can access this area.  Second, a warning:  this post is lengthy, but the length is necessary in order to understand the patterns and data in the three files I examine.

Finally, this will be a two-parter.  There was way too much data to cover everything in just one post.  This post will examine the data left behind by Google Assistant when used via Android Auto.  The second post will examine the data left behind by Google Assistant when used outside of the car (i.e. when I yell at the device).

A note about the test environment:  this data was generated on a rooted Nexus 5X running Android Oreo (8.1), with a patch date of December 5, 2018.  The data used in this article can be found here.

This research is an off-shoot of what I did with Android Auto, so if you want the full backstory, you can read the post here.  But…

Let’s Review

Google Assistant resides in the /data/data directory.  The folder is com.google.android.googlequicksearchbox.  See Figure 1.


Figure 1.

This folder also holds data about searches that are done from the Quick Search Box that resides at the top of my home screen (in Oreo).  This box has been lurking, in some fashion or another, since Donut, so it has had around 10 years to mature.  The folder has the usual suspect folders along with several others.  See Figure 2 for the folder listings.


Figure 2.

The folder of interest here is app_session.  This folder has a great deal of data, but just looking at what is here one would not suspect anything.  The folder contains several .binarypb files, which I have learned, after having done additional research, are binary protocol buffer (protobuf) files.  Protocol buffers are Google’s home-grown serialization format, a rival of sorts to XML and JSON.  These files contain data that is relevant to how a user interacts with their device via Google Assistant.  See Figure 3.


Figure 3.
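
Because there is no published schema for these session files, the most a script can do is walk the raw protobuf wire format (Google’s protoc tool can do something similar with its --decode_raw option).  Below is a minimal, hypothetical Python sketch of that idea.  Keep in mind these .binarypb files contain embedded binary blobs (such as the MP3 audio discussed below), so a schema-less walk may not get cleanly through an entire file; treat it as an exploration aid, not a parser.

    # A sketch of walking protobuf wire format without a schema.  Illustrative
    # only: the session files contain embedded blobs and may not parse cleanly.
    import struct

    def read_varint(buf, pos):
        """Read a base-128 varint starting at pos; return (value, new_pos)."""
        result, shift = 0, 0
        while True:
            b = buf[pos]
            result |= (b & 0x7F) << shift
            pos += 1
            if not b & 0x80:
                return result, pos
            shift += 7

    def walk_fields(buf):
        """Yield (field_number, wire_type, value) for top-level fields."""
        pos = 0
        while pos < len(buf):
            key, pos = read_varint(buf, pos)
            field, wire = key >> 3, key & 0x07
            if wire == 0:                                # varint
                value, pos = read_varint(buf, pos)
            elif wire == 1:                              # 64-bit
                value = struct.unpack_from("<Q", buf, pos)[0]; pos += 8
            elif wire == 2:                              # length-delimited (strings, nested messages)
                length, pos = read_varint(buf, pos)
                value = buf[pos:pos + length]; pos += length
            elif wire == 5:                              # 32-bit
                value = struct.unpack_from("<I", buf, pos)[0]; pos += 4
            else:                                        # deprecated/unknown wire type; stop
                break
            yield field, wire, value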

Each .binarypb file here represents a “session,” which I define as each time Google Assistant was invoked.  Based on my notes, I know when I summoned Google Assistant, how I summoned it, and what I did when I summoned it.  The first time I summoned Google Assistant is represented in the file with the last five digits of 43320.binarypb.  Figure 4 shows the header of the file.


Figure 4.

The ASCII “car_assistant” seems to imply this request had been passed to Google Assistant from Android Auto.  In each test that I ran in Android Auto, this phrase appeared at the beginning of the file.  Additionally, the string in the smaller orange box (0x5951EF) appeared at the beginning of the file at the same byte offset each time (0x11).  I hesitate to call this a true “file header,” though.

If you read my Android Auto post, you will know the string in the red box is the start of a MP3 file.  You can see the end of the MP3 file in Figure 5.


Figure 5.

The string in the orange box is the marker of the LAME MP3 codec, and the strings in the red boxes in Figures 4 and 5 are what I called “yoda” strings. Seeing these things, I carved from the first yoda (seen in Figure 4), to the last (seen in Figure 5), for a total of 11.1 KB.  I then saved the file with no extension and opened it in VLC Player.  The following came out of my speakers:

“You’ve got a few choices.  Pick the one you want.”

Based on my notes, this was the last phrase Google Assistant spoke to me via Android Auto prior to handing me off to Maps.  In this session, I had asked for directions to Starbucks and had not been specific about which one, which caused the returned reply that I had just heard.  There was other interesting data in this file, such as the text of what I had dictated to Google Assistant.  I began to wonder if it would be possible to determine if there were any patterns, identifying, or contextual data in this file that would be useful to or could act as “markers” for digital forensic practitioners.  Using 43320.binarypb as a starting point, I set off to see if I could map this file and the others on which I had taken notes.
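
If you want to repeat this carve without relying on a hex editor’s save-selection feature, a small Python helper will do it.  The offsets below are placeholders you would determine by eye, exactly as described above; they are not fixed signatures, and the only real marker in play is the LAME string mentioned earlier.

    # Hypothetical carving helper: the start/end offsets come from eyeballing
    # the file in a hex editor, exactly as described above.
    def carve(in_path, start, end, out_path):
        with open(in_path, "rb") as f:
            data = f.read()
        with open(out_path, "wb") as out:
            out.write(data[start:end])

    # Finding the LAME codec marker can help locate the tail of the MP3:
    # open("43320.binarypb", "rb").read().find(b"LAME")

    # Example with made-up offsets (~11.1 KB of audio); open the output in VLC:
    # carve("43320.binarypb", 0x110, 0x110 + 11366, "ga_response.mp3")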

Looking at these files in hex and ASCII, I started to notice a bit of a pattern.  While there is a difference between interactions in the car (via Android Auto) and outside of the car (yelling at the phone), there are some high-level similarities between these files regardless of how Google Assistant is used.  Below, I examine the data that is generated by Google Assistant via Android Auto.  The second part of this post will examine the data left behind when outside of the car.

A Deep Dive

I chose 43320.binarypb as my starting point on purpose:  there was a single request for directions in this session.  I thought the file would be straightforward, and I was right…sorta.

The session was started via Android Auto, and I had invoked Google Assistant via a button on my steering wheel (the phone was connected to the car).  The session went like this:

            Me:                  “I need directions to Starbucks.”

// Google Assistant thought for a few seconds //

            GA:                 “You’ve got a few choices.  Pick the one you want.”

After that I was handed off to Maps and presented with a few choices.  I chose a particular location and route, and then went on my way.  Figure 4 shows the top of the file, and I have already mentioned the MP3 data (Figures 4 and 5), so I will skip that portion of the file.

The first area of the file after the MP3 portion was a 4-byte string, BNDL (0x424E444C).  Just make note of this for now, because it comes up a lot.  After BNDL there was some information about the version of Android Auto I was running, and, potentially, where the voice input was coming from (/mic /mic); see the red box and orange box, respectively, in Figure 6.

Figure 6

Figure 6.

There is an additional string in there that, if you weren’t paying attention, you would miss as it’s stuck at the end of some repetitive data.  I certainly missed it.  Take a look at the string in the blue box in Figure 6.  The string is 0x30CAF25768010000 (8 bytes) and appears at the end of some padding (please do not judge – I couldn’t come up with a better name for those 0xFF’s).  I read it little endian, converted it to decimal, and got a pleasant surprise:  1547663755824.  I recognized this format as Unix Epoch Time, so I turned to DCode, and had my Bob Ross-ian moment.  See Figure 7.

Side note:  I had been trying to find a date/time stamp in this file for two weeks, and, as frequently happens with me, I found it by accident. 

Figure 7

Figure 7.

Based on my notes, this is when I had summoned Google Assistant in order to ask for directions:  01/16/2019 at 13:35 (EST).
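
Decoding these 8-byte values is easy to script.  Here is a quick sketch using the value from Figure 6; the only assumption is that the value is a little-endian, millisecond-precision Unix timestamp, which matches what DCode reported.

    import struct
    from datetime import datetime, timezone

    raw = bytes.fromhex("30CAF25768010000")      # the 8 bytes as they appear in the file
    millis, = struct.unpack("<Q", raw)           # read as little-endian unsigned 64-bit
    print(millis)                                # 1547663755824
    print(datetime.fromtimestamp(millis / 1000, tz=timezone.utc))
    # 2019-01-16 18:35:55.824000+00:00, i.e. 13:35:55 EST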

Next, com.google.android.apps.gsa.shared.search.QueryTriggerType (red box) caught my attention.  Just below it was the following:  webj gearhead* car_assistant gearhead (green box).  If you read my Android Auto post, you will know the title of the folder in which Android Auto resides has “gearhead” in it (com.google.android.projection.gearhead).  So, does this indicate Google Assistant was triggered via Android Auto?  Maybe…or maybe not.  This could be a one-off.  I filed this away and continued.  See Figure 8.

Figure 8

Figure 8.

The next thing is something I mentioned in the Android Auto post:  a 5-byte string (0xBAF1C8F803) and an 8-byte string (0x014C604080040200) that appeared just above my actual vocal inquiry.  They can be seen in Figure 9:  the 5-byte string is in the blue box, the 8-byte string is in the green box, and the voice inquiry is in the top purple box.  Take note that there is a variation of what I actually said in the bottom purple box.  Also note the BNDL in the red box.

Figure 9

Figure 9.

Below that is data I had seen earlier in the file (in Figure 6):  the Android Auto version number, /mic /mic (orange box), the same time stamp I had just seen (purple box), and QueryTriggerType with webj gearhead* car_assistant I need directions to Starbucks gearhead (green box).  And, there is BNDL again.  See Figures 10 and 11.

Figure 10

Figure 10.

Figure 11

Figure 11.

I want to draw attention to two additional things.  The first, in Figure 11, is another time stamp in the blue box.  This is another Unix time stamp (0xB176F15768010000).  This time is 01/16/2019 at 13:34:28 (EST), which is just under a minute and a half earlier than the time stamp I had seen previously (when I invoked Google Assistant).  This is odd, but the string just below it may have something to do with it:  com.google.android.apps.gsa.shared.logger.latency.LatencyEvents.  I will say that 13:34 is when I connected the phone to the car and started Android Auto.

The second area is in the red box in Figure 12-1.  There you see the following:  velvet:query_state:search_result_id (red box) and then a 16-byte string ending in 0x12 (blue box).  This area appears in every Google Assistant session I have examined.  I have a theory about it but will wait until later to explain.  As with BNDL, just put it to the side for the moment.

Figure 12-1

Figure 12-1.

Figure 12-2

Figure 12-2.

In Figure 12-1, you can also see BNDL (yellow box), the 8-byte green box string just prior to my vocal inquiry, and then the inquiry itself (purple box).  After a bit of padding, there is BNDL.  After that, in Figure 12-2, there is the same data seen in Figures 6 and 10 (orange box), and…what’s this?  Another time stamp (red box)?  I did the same thing as before and got another Unix Epoch Time time stamp.

 As with the previous time stamp, this one is also prior to the first time stamp I had encountered in the file, although it is within the same minute in which I had invoked Google Assistant.  As before, this time stamp appears just before the string that contains LatencyEvents.  Does this have something to do with any latency the device is experiencing between it and Google’s servers?  Again, I am not sure.

Below this time stamp is a replay of what I had seen in Figure 10 (Figure 12-2 – orange box).  The area I discussed in Figure 11 is also present, sans my vocal input (purple).  See Figure 13.

Figure 13

Figure 13.

After that last BNDL, the same items I have already discussed are recycled again, and the first time stamp I had found is present again (red box).  See Figures 14-1, 14-2, and 14-3.

Figure 14-1

Figure 14-1.

Figure 14-2

Figure 14-2.

Figure 14-3

Figure 14-3.

The very last portion of the file is velvet:query_state:search_result_id (orange box) along with the 16-byte string (purple box); however, there is a small twist:  the last byte has changed from 0x12 to 0x18.  Just after that string is a 9-byte string, 0x01B29CF4AE04120A10 (blue box).  This string appears at the end of each session file I have examined, along with the string and.gsa.d.ssc (red box).  See Figure 15.

Figure 15

Figure 15.

So, just in this one file I saw recurring patterns and strings.  Were these things unique to this particular file, or do these patterns span all of these files?

The next file I chose was 12067.binarypb.  As before, there was a single request for directions in this session.  This session, I was a bit more specific about the location for which I was looking.

This session was also started via Android Auto, and I had invoked Google Assistant via a button on my steering wheel (the phone was connected to the car).  The session went like this:

           Me:                  “Give me directions to the Starbucks in Fuquay Varina.”

                                    // Google Assistant thought for a few seconds //

           GA:                 “Starbucks is 10 minutes from your location by car and light traffic.”

As can be seen in Figure 16, the strings 0x5951EF and car_assistant appear at the top of the file.  Unlike the previous file, however, there is an additional bit of data here:  com.android.apps.gsa.search.core.al.a.au, a BNDL, and ICING_CONNECTION_INITIALIZATION_TIME_MSEC.  The “yoda” is also here.  See the blue, green, purple, orange, and red boxes, respectively, in Figure 16.

Figure 16

Figure 16.

Figures 17-1 and 17-2 show the end of the MP3 data, a BNDL, and then some data seen in the 43320.binarypb file:  the Android Auto version number, /mic /mic (orange box), a time stamp (red box), and QueryTriggerType with the webj gearhead* car_assistant gearhead (green box).  The time stamp here is 0x9FC2CB5B68010000, which, when converted to decimal, is 1547728306847.  Just like the previous file, this is also Unix Epoch Time.  I used DCode to convert, and got 01/17/2019 at 07:31:46 (EST).

Figure 17-1

Figure 17-1.

Figure 17-2

Figure 17-2.

According to my notes, this is the time I invoked Google Assistant, and asked for directions.

Traveling slightly further down I arrive in the area seen in Figure 18.  Here I find the 5-byte (blue box) and 8-byte strings (green box) I had seen in 43320.binarypb.  Then I see my request (purple box).  Also note the lower purple boxes; these appear to be what I said, and variations of what I said.  Just before each new variation, there is a number (4, 3, 5, and 9).  I will note that the text behind 4 and 5 differs only by the period at the end of the request.  I suspect that these numbers are assigned to each variation to keep tabs on each; however, I am not sure why.  There is also a BNDL at the end of this area (red box).

Figure 18

Figure 18.

Just below the requests I found some familiar information (Figures 19-1 and 19-2).  The Android Auto version number, /mic /mic (purple box), a time stamp (orange box), and QueryTriggerType with the webj gearhead* car_assistant gearhead (green box) are all here.  The time stamp here is the same as the previous one.  There is an additional piece of data here; just past the webj gearhead* car_assistant string is the string 4give me directions to the Starbucks in Fuquay Varina gearhead (blue box).  There is also a BNDL at the end (red box).

Figure 19-1

Figure 19-1.

Figure 19-2

Figure 19-2.

Below the area in Figure 19-2, there is a time stamp (Figure 20) shown in a blue box.  The string (0xBC32C65B68010000) results in a Unix Epoch Time (1547727942332) of 01/17/2019 at 07:25:42 (EST), which is just over six minutes prior to my invocation of Google Assistant.  This time stamp appears just before com.google.android.apps.gsa.shared.logger.latency.LatencyEvents again.  This time coincided with me starting a podcast through Android Auto.

Figure 20

Figure 20.

Below the time stamp, the velvet:query_state:search_result_id string appears again in Figures 21-1 and 21-2, along with the 16-byte string ending in 0x12, a BNDL, the 8-byte string, my vocal inquiries and their variations, and another BNDL.  See the red, green, blue, purple, and orange boxes, respectively.

Figure 21-1

Figure 21-1.

Figure 21-2

Figure 21-2.

Just after the BNDL is the information about the Android Auto version I was using, the /mic /mic string (orange box), and a Unix Epoch Time time stamp (red box).  This one is the same as the first one I had seen in this file (the time I invoked Google Assistant).  See Figure 22.

Figure 22

Figure 22.

Below that are some new things.  First, the text of the MP3 file at the beginning of this file (purple box).  Second, a string that fits a pattern that I see in other files:  xxxxxxxxxxxx_xxxxxxxxx (green box).  The content of the string is different, but, most of the time, the format is 12 characters underscore 12 characters.  I am not sure what these are, so if any reader knows, please let me know so I can add it here (full credit given).  For the purposes of this article I will refer to it as an identifier string.

Also present is the URL for the location I asked for in Google Maps (orange box), and another identifier string (yellow box).  Beyond that is the velvet:query_state:search_result_id string, along with the 16-byte string ending in 0x18 (red box), the 9-byte string (0x01B29CF4AE04120A10 – blue box), and the string and.gsa.d.ssc (yellow box).  See Figures 23-1 and 23-2.

Figure 23-1

Figure 23-1.

Figure 23-2

Figure 23-2.

So, for those keeping score, let’s review.  While each request was slightly different, there were some consistencies between both files.  The format, in particular, was fairly close:

  1. The beginning of the file (the 3-byte 0x5951EF string and “car_assistant”).
  2. The MP3 audio at the beginning of the file which contains the last audio interaction prior to being sent to a different app (Maps).
  3. BNDL
  4. Android Auto Version along with the /mic /mic
  5. The date/time stamp of when Google Assistant is invoked, which appears just after some padding (0xFF).
  6. A 5-byte string (0xBAF1C8F803) that appears just before the vocal input appears the first time in a file. This string appears only here.
  7. An 8-byte string (0x014C604080040200) that appears just before the vocal input, regardless of where it appears within the file.
  8. Text of the vocal input.
  9. BNDL
  10. Android Auto Version along with the /mic /mic
  11. Another date/time stamp of when Google Assistant was invoked (same as the first).
  12. The string webj gearhead* car_assistant <my vocal input> gearhead (what I actually said)
  13. BNDL
  14. What I have decided to call a “latency time stamp,” although, it may indicate the last time any activity was done via Android Auto (including starting Android Auto) prior to the invocation of Google Assistant.
  15. The velvet:query_state:search_result_id string appears along with the 16-byte string ending in 0x12.
  16. Items 7, 8, 9, 10, and 11 recycle.
  17. The velvet:query_state:search_result_id string appears along with the 16-byte string ending in 0x18, which appears at the end of the file.
  18. The 9-byte string 0x01B29CF4AE04120A10 after Item 17.
  19. The string and.gsa.d.ssc that appears after Item 18.

There is some divergence in the files.  In 43320, items 7, 8, 9, 10, and 11 recycle a second time, whereas they recycle only once in 12067.  43320 also contains an extra latency time stamp that was not present in 12067.  Additionally, 12067 contains some extra data at the end of the file, specifically, the text of the MP3 file at the start of the file, an identifier string, a URL for Maps, and another identifier string.

File 12067 also had some extra data at the beginning that did not appear in 43320.
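
Based on the outline above, it is straightforward to have a script flag these markers and their offsets in a session file.  The sketch below uses only the byte strings observed in these two files, so treat the marker values as tentative; they may not hold for other devices or versions of the app.

    # Marker values taken from the observations above; treat them as tentative.
    MARKERS = {
        "BNDL": b"BNDL",
        "first vocal input (5-byte)": bytes.fromhex("BAF1C8F803"),
        "vocal input (8-byte)": bytes.fromhex("014C604080040200"),
        "search_result_id": b"velvet:query_state:search_result_id",
        "end of file (9-byte)": bytes.fromhex("01B29CF4AE04120A10"),
    }

    def scan(path):
        data = open(path, "rb").read()
        for name, marker in MARKERS.items():
            offset = data.find(marker)
            while offset != -1:
                print(f"0x{offset:08X}  {name}")
                offset = data.find(marker, offset + 1)

    # scan("43320.binarypb")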

I also used Android Auto to test sending and receiving messages, and the file that represents that test is 22687.binarypb.  There were three sessions on 01/27/2019.  The first session, which started at 14:16 (EST), went as follows:

Chat 1

About two minutes later, at 14:18, a second session was started.  It went as follows:

Chat 2

About 3 minutes later (14:21 EST) I asked for directions using the same method as before (invocation via a button on my steering wheel).  The session went as follows:

Chat 3

The first thing I notice is there is a single binarypb file for 01/27/2019 (22687), even though there were three sessions.  Inspection of the file finds the 3-byte string, 0x5951EF, is present along with the car_assistant string.  There is also a “yoda.”  See the orange, blue, and red boxes, respectively, in Figure 24.  I carved from the yoda in Figure 24 to the end of the padding in Figure 25 (orange box).

Figure 24

Figure 24.

Figure 25

Figure 25.

The following came out of my computer speakers:

“Smithfield Chicken and BBQ is 51 minutes from your location by car and light traffic.”

Now, this is interesting.  The first two sessions, which started at 14:16 and 14:18, did not include anything regarding directions.  The third session at 14:21 did involve directions.  I wonder if the three sessions were so close together that Google Assistant/Android just made one binarypb file to encompass all three.  That would require more testing to confirm (or disprove), but it is beyond the scope of this exercise and article.

Figures 26-1 and 26-2 show the end of the MP3 data and some familiar data:  the Android Auto version information, /mic /mic, and a time stamp.  They also show the QueryTriggerType and the webj gearhead* car_assistant gearhead string.  See the blue, orange, red, and purple boxes, respectively.  The time stamp here is 0x38CCC29068010000.  I converted it to decimal (1548616911928), fired up DCode, and got 01/27/2019 at 14:21:51 (EST).  This time is when I started the session in which I asked for directions.

Figure 26-1

Figure 26-1.

Figure 26-2

Figure 26-2.

Below that is some more familiar data.  The 5-byte string (0xBAF1C8F803) and 8-byte string (0x014C604080040200) appear (blue and green boxes in Figure 27-1), and there is the vocal input from my request for directions (that occurred roughly three minutes later).  There are also variations of the actual vocal input; each variation is designated by a letter (J, K, O, P, and M) (purple boxes).  After the variations is the Android Auto version string, the /mic /mic string, and the same time stamp from before (orange and red boxes) (Figure 27-2).

Figure 27-1

Figure 27-1.

Figure 27-2

Figure 27-2.

The QueryTriggerType (red box) appears along with the webj gearhead* car_assistant J get me directions to the Smithfield Chicken & BBQ in Warsaw North Carolina gearhead (green box).  A BNDL appears (blue box), and then another time stamp (purple).  The byte string is 0x171DBC9068010000, which, in decimal, is 1548616473879.  This converts to 01/27/2019 at 14:14 (EST), which is the time I hooked the phone to the car and started Android Auto.  See Figure 28.

Figure 28

Figure 28.

After that data is the velvet:query_state:search_result_id string, the accompanying 16-byte string ending in 0x12 (orange box), and a BNDL (blue box).  The 8-byte string (0x014C604080040200) appears (green box), my vocal input that started the first session (“read my newest text message” – purple box), and then a BNDL (blue box).  After that is the Android Auto version, /mic /mic (yellow box), and a time stamp (red box).  See Figures 29-1 and 29-2.  The time stamp here is 0x0385BD9068010000, which is decimal 1548616566019.  When converted, it is 01/27/2019 at 14:16:06 (EST), which is the time the first session was invoked.

Figure 29-1

Figure 29-1.

Figure 29-2

Figure 29-2.

Also in Figure 29-2 is the QueryTriggerType and the webj gearhead* car_assistant (dark purple box) string.

Figure 30 has some new data in it.  A string appears that, while not identical in format, resembles something I had seen before.  It appears to be an identifier string:  GeQNOXLPoNc3n_QaG4J3QCw.  This is not 12 characters underscore 12 characters, but it is close.  Right after the identifier string is my vocal input “read my new text message.”  See the red box and blue box, respectively, in Figure 30.

Figure 30

Figure 30.

Figures 31-1 and 31-2 show the two new text messages that were identified.  See the blue and red boxes.

Figure 31-1

Figure 31-1.

Figure 31-2

Figure 31-2.

Scrolling down a bit I find another identifier string:  eQNOXLPoNc3n_QaG4J3QCw (red box).  This identifier is the same as the first one, but without the leading “G.”  After this identifier is the velvet:query_state:search_result_id and the accompanying 16-byte string ending in 0x12 (orange box).  A BNDL appears at the end (green box).  See Figure 32.

Figure 32

Figure 32.

Next up is the 8-byte string (0x014C604080040200), and my next vocal input “read it.”  Just below my vocal input is the Android Auto version information, /mic /mic, and a time stamp.  Just below the time stamp is the QueryTriggerType and the webj gearhead* car_assistant gearhead strings (not pictured).  See the blue, orange, red, and purple boxes, respectively in Figures 33-1 and 33-2.  The time stamp here is 0xD796BD9068010000.  I converted it to decimal (1548616570583), fired up DCode and got 01/27/2019 at 14:16:10 (EST).  While I was not keeping exact time, this would seem to be when Google Assistant asked me whether or not I wanted to read the chat message from Josh Hickman.

Figure 33-1

Figure 33-1.

Figure 33-2

Figure 33-2.

There is another identifier string further down the file, GeQNOXLPoNc3n_QaG4J3QCw, and just below it is my vocal input “read my newest text message.”  See the blue and red boxes, respectively, in Figure 34.  This is interesting.  Could it be that Google Assistant is associating this newest vocal input (“read it”) with the original request (“read my newest text message”) by way of the identifier string in order to know that the second request is related to the first?  Maybe.  This would definitely require some additional research if that is the case.

Figure 34

Figure 34.

Figures 35 and 36 show the text messages that were new when the request was made.

Figure 35

Figure 35.

Figure 36

Figure 36.

After some gobbly-goo, I found another identifier string:  gwNOXPm3FfKzggflxo7QDg (red box).  This format is completely different from the previous two I had seen.  Maybe this is an identifier for the vocal input “read it.”  Maybe it’s a transactional identifier…I am not sure.  See Figure 37.

Figure 37

Figure 37.

In Figure 37 you can also see the velvet:query_state:search_result_id and the 16-byte string ending in 0x12 (orange box), a BNDL, (blue box) the 8-byte string (green box), my next vocal input (purple box), and another BNDL.

Figure 38 shows familiar data:  Android Auto version, /mic /mic (green box), and a time stamp:  0x27BABD9068010000 (red box).  This converts to 1548616579623 in decimal, and 01/27/2019 at 14:16:19 in Unix Epoch Time.  As with the previous request, I wasn’t keeping exact time, but this would probably line up with when I said “Go on to the next one.”

Figure 38

Figure 38.

Figure 39 shows the QueryTriggerType string along with webj gearhead* car_assistant string.

Figure 39

Figure 39.

Figure 40 shows that identifier string again, and the vocal input that kicked off this session:  “read my newest text message.”  I am beginning to suspect this is actually some type of transactional identifier to associate “go on to the next one” with “read my newest text message.”

Figure 40

Figure 40.

Figures 41 and 42 show the new text messages.

Figure 41

Figure 41.

Figure 42

Figure 42.

There is another identifier string in Figure 43:  jwNOXMv_Ne_B_QbewpK4CQ (blue box).  This format is completely new compared to the previous ones.  Additionally, the velvet:query_state:search_result_id and the 16-byte string ending in 0x12 (orange box) appear, along with a BNDL (red box), the 8-byte string (green box), my next vocal input (purple box), and a BNDL.

Figure 43

Figure 43.

Figure 44 shows the Android Auto version, /mic /mic (blue box), and another time stamp (red box).  This time stamp is 0xC1ECBD9068010000, which converts to 1548616592577.  This is 01/27/2019 at 14:16:32 (EST).  This probably coincides with my vocal input “That’s it for now.” 

Figure 44

Figure 44.

Figure 45 has the QueryTriggerType and webj gearhead* car_assistant.

Figure 45

Figure 45.

Figure 46 shows a few things.  The first is the velvet:query_state:search_result_id and the 16-byte string ending in 0x12 (orange box).  The second thing is another identifier string, MgNOXNWTAtGp5wL03bHACg (blue box).  As before, this format does not match anything I have seen previously.

The third, and the most interesting part, is the start of the second session.  The only dividing line here is the velvet:query_state:search_result_id and the 16-byte string ending in 0x12, and BNDL (red box).  The green box is the 8-byte string, and the purple box contains my vocal input, “read my newest text messages.”  The purple boxes below are variations of what I said.

Figure 46

Figure 46.

Figure 47 shows the Android Auto version string, the /mic /mic string (blue box), and another time stamp (red box).  This time the stamp is 0x8C28C09068010000.  This converts to 1548616738956 in decimal, which is 01/27/2019 at 14:18:58 (EST) in Unix Epoch Time, which is the time I invoked Google Assistant for the second session.

Figure 47

Figure 47.

The next strings that appear are the QueryTriggerType and webj gearhead* car_assistant strings.  See Figure 48.

Figure 48

Figure 48.

The next string is another identifier string.  This time, it is associated with my newest vocal input, “read my newest text messages” (blue box).  The string is GJgROXL2qNeSIggfk05CQCg (green box).  See Figure 49.

Figure 49

Figure 49.

Figures 50 and 51 show the text messages.

Figure 50

Figure 50.

Figure 51

Figure 51.

The next thing I see is another identifier string:  JgROXL2qNeSIggfk05CWCg (orange box).  This string is the same as the previous one (in Figure 49), but without the leading “G.”  This is the same behavior I saw in the first session.  Beyond that there is the velvet:query_state:search_result_id and the 16-byte string ending in 0x12 (blue box), a BNDL (red box), the 8-byte string (green box), my next vocal input, “hear it” (purple box), and another BNDL.  See Figure 52.

Figure 52

Figure 52.

Figure 53 shows the Android Auto version string, the /mic /mic string (blue box), and another time stamp (red box).  This time the stamp is 0x153AC09068010000.  This converts to 1548616743445 in decimal, which is 01/27/2019 at 14:19:03 (EST) in Unix Epoch Time, which would coincide with my vocal input “hear it.”

Figure 53

Figure 53.

The next strings that appear are the QueryTriggerType and webj gearhead* car_assistant strings.  See Figure 54.

Figure 54

Figure 54.

Scrolling a bit finds an identifier string I have seen before:  GJgROXL2qNeSIggfk05CQCg (green box).  This is the first identifier seen in this session (the second one).  Just below it is the vocal input that started this session:  “read my newest text messages” (red box).  See Figure 55.

Figure 55

Figure 55.

Figures 56 and 57 show the messages that were new.

Figure 56

Figure 56.

Figure 57

Figure 57.

Figure 58 shows a pattern I have seen before.  First is another identifier string:  MAROXPPhAcvn_QaPpI24BA (orange box).  The second and third are velvet:query_state:search_result_id and the 16-byte string ending in 0x12 (blue box).  There is another BNDL (red box), the 8-byte string (green box), my next vocal input (purple box), “I’ll reply”, and another BNDL.  Also note the variations of what I said below my actual input (lower purple boxes).

Figure 58

Figure 58.

Figure 59 shows the Android Auto version string, the /mic /mic string (blue box), and another time stamp (red box).  This time the stamp is 0xD85CC09068010000.  This converts to 1548616752344 in decimal, which is 01/27/2019 at 14:19:12 (EST) in Unix Epoch Time, which would coincide with my vocal input “I’ll reply.”

Figure 59

Figure 59.

Figure 60 shows the next strings that appear:  the QueryTriggerType and webj gearhead* car_assistant strings.

Figure 60

Figure 60.

The next thing of interest is what is seen in Figure 61.  There is another identifier string, GPQROXMz3L6qOggfWpKeoCw (blue box).  This is not a string we have seen before.  Just below it is the vocal input that started this session (red box).

Figure 61

Figure 61.

I had to scroll quite a bit through some Klingon, but eventually I got to the area in Figure 62.  The red box shows another identifier string:  PQROXMz3L6qOggfWpKeoCw.  This is the same string that we saw in Figure 61, sans the leading “G.”  Again, this behavior is a pattern that we have seen in this particular file.  It strengthens my suspicion that it is some type of transactional identifier that keeps vocal inputs grouped together.

Figure 62

Figure 62.

The second thing is velvet:query_state:search_result_id and the 16-byte string ending in 0x12 (blue box).  In Figure 63, there is another BNDL (red box), the 8-byte string (green box), and my next vocal input (the dictated message – purple box).  Also note the variations of what I said below my actual input (lower purple boxes).  Note that each variation is delineated by a character:  B, =, and >.

Figure 63

Figure 63.

Figure 64 shows the Android Auto version string, the /mic /mic string (blue box), and another time stamp (red box).  This time the stamp is 0x2192C09068010000.  This converts to 1548616765985 in decimal, which is 01/27/2019 at 14:19:25 (EST) in Unix Epoch Time, which would coincide with my dictation of a message to Google Assistant.

Figure 64

Figure 64.

Figure 65 shows the next strings that appear:  the QueryTriggerType and webj gearhead* car_assistant strings.

Figure 65

Figure 65.

The next thing of interest is what is seen in Figure 66.  There is an identifier string that we have seen before, GPQROXMz3L6qOggfWpKeoCw (blue box), and the initial vocal input that started this session (green box).  Again, I am beginning to think this is a method of keeping vocal inputs grouped within the same session.

Figure 66

Figure 66.

Scrolling through yet more Klingon, I find the area shown in Figure 67.  The red box shows another identifier string:  RwROXPzPAcG5gge9s5n4DQ.  This is a new identifier.  The velvet:query_state:search_result_id string and the 16-byte string ending in 0x12 (orange box) are also present.  There is another BNDL (blue box), the 8-byte string (green box), my next vocal input, “send it” (purple box), and another BNDL.

Figure 67

Figure 67.

Figure 68 shows the Android Auto version string, the /mic /mic string (blue box), and another time stamp (red box).  This time the stamp is 0x16BAC09068010000.  This converts to 1548616776214 in decimal, which is 01/27/2019 at 14:19:36 (EST) in Unix Epoch Time, which would coincide with my instructing Google Assistant to send the message I dictated.

Figure 68

Figure 68.

Figure 69 shows the next strings that appear:  the QueryTriggerType and webj gearhead* car_assistant strings.

Figure 69

Figure 69.

Figure 70 shows an identifier string, UQROXJ6aGKixggfh64qYDg (orange box), which is new.  Just below it, the velvet:query_state:search_result_id string and the 16-byte string ending in 0x12 (blue box) are also present.  There is another BNDL (red box) and the 8-byte string (green box).  Below the 8-byte string is the vocal input that started the third session:  “get me directions to the Smithfield Chicken & BBQ in Warsaw North Carolina” (purple box).  Figure 71 shows the variations of my vocal input, which are identified by J, K, O, P, and M (purple boxes).  Below that is a BNDL.  This is the same data seen in Figures 27-1 and 27-2.

Figure 70

Figure 70.

Figure 71

Figure 71.

Figure 72 shows the Android Auto version string, the /mic /mic string (blue box), and another time stamp (red box).  This time the stamp is 0x38CCC29068010000.  This converts to 1548616911928 in decimal, which is 01/27/2019 at 14:21:51 (EST) in Unix Epoch Time.  This is the same time stamp seen in Figure 26.

Figure 72

Figure 72.

Figure 73 shows the next strings that appear:  the QueryTriggerType and webj gearhead* car_assistant strings.

Figure 73

Figure 73.

Scrolling down oh so slightly, I find the text of the MP3 file at the beginning of the file (blue box), and the Google Maps URL for the location for which I had asked for directions (green box).  See Figure 74.  Figure 75 shows another identifier string, 1wROXOOFJc7j_Aa72aPAB (orange box).  After the identifier string is the velvet:query_state:search_result_id string.  Additionally, the 16-byte string ending in 0x18 (green box), the 9-byte string (0x01B29CF4AE04120A10 – blue box), and the string and.gsa.d.ssc (red box) appear.

Figure 74

Figure 74.

Figure 75

Figure 75.

Comparisons

So, what is the final score?  If you’re still reading, let’s recap, and include what we found in the 22687.binarypb file.  The differences from the earlier recap are noted at the end of the relevant items:

  1. The beginning of the file (the 3-byte 0x5951EF string and “car_assistant”).
  2. The MP3 audio at the beginning of the file which contains the last audio interaction prior to being sent to a different app (Maps).
  3. BNDL
  4. Android Auto Version along with the /mic /mic
  5. The date/time stamp of when Google Assistant is invoked, which appears just after some padding (0xFF). In the third file, 22687, the time stamp is the time for the third session.
  6. A 5-byte string (0xBAF1C8F803) that appears just before the vocal input appears the first time in a file. This string appears only here. In the third file, 22687, this appeared before the first vocal input, which, as it turns out, is the vocal input that started the third session.
  7. An 8-byte string (0x014C604080040200) that appears just before the vocal input, regardless of where and how many times it appears within the file.
  8. Text of the vocal input.
  9. BNDL
  10. Android Auto Version along with the /mic /mic
  11. Another date/time stamp of when Google Assistant was invoked (same as the first).
  12. The string webj gearhead* car_assistant <my vocal input> gearhead (what I actually said). This item only appeared once in 22687 (the inquiry asking for directions).
  13. BNDL
  14. What I have decided to call a “latency time stamp,” although, it may indicate the last time any activity was done via Android Auto (including starting Android Auto) prior to the invocation of Google Assistant. In 22687, this only happened once.
  15. The velvet:query_state:search_result_id string appears along with the 16-byte string ending in 0x12.
  16. Items 7, 8, 9, 10, and 11 recycle.
  17. The velvet:query_state:search_result_id string appears along with the 16-byte string ending in 0x18, which appears at the end of the file.
  18. The 9-byte string 0x01B29CF4AE04120A10 after Item 17.
  19. The string and.gsa.d.ssc that appears after Item 18.

Conclusions

I feel comfortable enough at this point to draw a few conclusions based on my observations so far.

  1. Each binarypb file will start by telling you where the request is coming from (car_assistant).
  2. What is last chronologically is first in the binarypb file. Usually, this is Google Assistant’s response (MP3 file) to a vocal input just before being handed off to whatever service (e.g. Maps) you were trying to use.  The timestamp associated with this is also at the beginning of the file.
  3. A session can be broken down into micro-sessions, which I will call vocal transactions.
  4. Vocal transactions have a visible line of demarcation by way of the 16-byte string ending in 0x12 (see the sketch after this list).
  5. A BNDL starts a vocal transaction, but also further divides the vocal transaction into smaller chunks.
  6. The first vocal input in the binarypb file is marked by a 5-byte string: 0xBAF1C8F803, regardless of when, chronologically, it occurred in the session.
  7. Each vocal input is marked by an 8-byte string (0x014C604080040200). While the 5-byte string appears only at the first vocal input in the binarypb file, the 8-byte string appears just prior to each vocal input.
  8. When Google Assistant doesn’t think it understands you, it generates different variations of what you said…candidates…and then selects the one it thinks you said.  You can see this in these files.
  9. In sessions where Google Assistant needs to keep things tidy, it will assign an identifier to vocal transactions. There does not appear to be any consistency (as far as I can tell) as to the format of these identifiers.
  10. The end of the final vocal transaction is marked by a 16-byte string ending in 0x18.
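
To make conclusions 3 and 4 concrete, here is a minimal segmentation sketch.  It assumes the velvet:query_state:search_result_id marker (which, in the files above, immediately precedes the 16-byte delimiter) is a reliable dividing line; that is an inference from three files, not a documented format.

    # Minimal sketch: split a session file into "vocal transactions" on the
    # velvet marker observed above.  The leading chunk holds the header/MP3.
    MARKER = b"velvet:query_state:search_result_id"

    def split_transactions(data):
        parts = data.split(MARKER)
        return [parts[0]] + [MARKER + p for p in parts[1:]]

    # transactions = split_transactions(open("22687.binarypb", "rb").read())
    # print(len(transactions) - 1, "vocal transactions")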

Figure 76 shows a visual version of the session files overall, and Figure 77 shows the vocal transactions portion in more detail.


Figure 76.


Figure 77.

What’s Next?

So, there is definitely a format to these files, and I believe there are enough markers that someone could create a tool to parse them to make them easier to read and examine.  They contain what a device owner said…what they actually said…to their car/device.  This data could be extremely valuable to examiners and investigators, regardless of the venue in which they operate (civil or criminal).

Yes, this data could potentially be in a user’s Google Takeout data, and getting it that way would be slightly easier, although there would be a waiting period.  But, what if you do not have the authority to get said data?  This data could still be resident on the device and, potentially, available.

If I had scripting skills, I would try to see if something could be written to parse these files; alas, my scripting skills are minimal.  Most of what I do is via Google and Ctrl/Cmd-C and Ctrl/Cmd-V.  If any reader can do this, and is willing, please let me know and I will post your product here and give full credit.  It would be awesome.
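
In the meantime, here is a rough proof-of-concept of the kind of script described above, not a finished parser.  It pulls printable strings (which include the vocal inputs and their candidate variations) and decodes 8-byte values that follow runs of 0xFF padding as millisecond Unix timestamps.  Everything about it rests on the observations in this post, so treat it as a starting point rather than a validated tool.

    # Proof-of-concept only: marker/padding assumptions follow the observations
    # in this post and may not hold for other devices or GSA versions.
    import re, struct, sys
    from datetime import datetime, timezone

    def strings(data, min_len=6):
        """Yield (offset, text) for runs of printable ASCII, like 'strings'."""
        for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
            yield m.start(), m.group().decode("ascii", "replace")

    def timestamps(data):
        """Yield (offset, datetime) for 8-byte values that follow 0xFF padding."""
        for m in re.finditer(rb"\xff{4,}(.{8})", data, re.DOTALL):
            millis, = struct.unpack("<Q", m.group(1))
            if 1_400_000_000_000 < millis < 1_700_000_000_000:  # sanity window
                yield m.start(1), datetime.fromtimestamp(millis / 1000, tz=timezone.utc)

    if __name__ == "__main__":
        data = open(sys.argv[1], "rb").read()
        for off, ts in timestamps(data):
            print(f"0x{off:08X}  timestamp  {ts}")
        for off, s in strings(data):
            print(f"0x{off:08X}  string     {s}")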

The second part of this post is forthcoming.  If you can’t wait, here’s a sneak peek:  there are some similarities…