Wickr. Alright. We’ll Call It A Draw.

Ugh.  Not again.

Portions of this blog post appeared in the 6th issue of the INTERPOL Digital 4n6 Pulse newsletter. 

I would like to thank Heather Mahalik and Or Begam, both of Cellebrite, who helped make the Android database portion of this blog post possible, and Mike Williamson of Magnet Forensics for all the help with the underpinnings of the iOS version.

I have been seeing the above pop-up window lately.  A lot.  Not to pick on any particular tool vendor, but seeing this window (or one similar to it) brings a small bit of misery.  Its existence means there is a high probability there is data on a particular mobile device that I am not going to get access to, and this usually comes after I have spent a considerable amount of time trying to gain access to the device itself.  Frustrating.

One of my mentors from my time investigating homicides told me early in my career that I was not doing it right unless I had some unsolved homicides on the books; he felt it was some sort of badge of honor and showed a dedication to the craft.  I think there should be a similar mantra for digital forensic examiners.  If you conduct digital forensic examinations for any substantial amount of time, you are going to have examinations where there is inaccessible data and nothing you do is going to change that.  You can throw every tool known to civilization at it, try to manually examine it, phone a friend, look on Twitter, search the blog-o-sphere, search the Discord channel, query a listserv, and conduct your own research, and still strike out.  This is a reality in our discipline.

Not being able to access such data is no judgment of your abilities; it just means you may not win this round.  Keep in mind there is a larger fight, and how you react to this setback is a reflection of your dedication to our craft.  Do you let inaccessible data defeat you, give up, and quit, or do you carry on with that examination, getting what you can, and apply that same tenacity to future examinations?

One needs the latter mindset when it comes to Wickr.  For those that are unfamiliar, Wickr is a company that makes a privacy-focused, ephemeral messaging application.  Initially available as an iOS-only app, Wickr expanded to include Android, Windows, macOS, and Linux, and branched out from personal messaging (Wickr Me) to small teams and businesses (Wickr Pro – similar to Slack) and an enterprise platform (Wickr Enterprise).  Wickr first hit the app market in 2012 and has been quietly hanging around since then.  Personally, I am surprised it is not as popular as Signal, but I think not having Edward Snowden’s endorsement and initially being secretive about its protocol may have hurt Wickr’s uptake a bit.

Regardless, this app can bring the pain to examinations.

Before I get started, a few things to note.  First, this post encompasses Android, iOS, macOS, and Windows; because of time constraints I was not able to test on Linux.  Second, the devices and their respective operating system versions/hardware are as follows:

Platform          Version    Device                          Wickr Version

Android           9.0        Pixel 3                         5.22

iOS               12.4       iPhone XS and iPad Pro 10.5     5.22

macOS             10.14.6    Mac Mini (2018)                 5.28

Windows 10 Pro    1903       Surface Pro                     5.28

Third, Wickr Me contains the same encryption scheme and basic functionality as the Pro and Enterprise versions: encrypted messaging, encrypted file transfer, burn-on-read messages, audio calling, and secure “shredding” of data. The user interface of Wickr Me is similar to the other versions, so the following sections will discuss the functionality of Wickr while using the personal version.

Finally, how do you get the data?  Logical extractions, at a minimum, should grab desktop platform Wickr data.  For Android devices, the data resides in the /data/data area of the file system, so if your tool can get to this area, you should be able to get Wickr data.  For iOS devices, you will need a jailbroken phone or an extraction tool such as one that is metal, gray, and can unlock a door to get the Wickr database.  I can confirm that neither a backup nor a logical extraction contains the iOS Wickr database.

Visual Walkaround

Wickr is available on Android, iOS, macOS, and Windows, and while these platforms are different, the Wickr user interface (UI) is relatively the same across these platforms. Figure 1 shows the Windows UI, Figure 2 shows the macOS UI, Figure 3 shows the iPhone, and Figure 4 shows the iPad. The security posture of the Wickr app on Android prevents screenshots from being taken on the device, so no UI figure is available.  Just know that it looks very similar to Figure 3.

Figure 1
Figure 1.  Wickr in Windows.
Figure 2.png
Figure 2.  Wickr in macOS.


Figure 3.png
Figure 3.  Wickr on iPhone.
Figure 4
Figure 4.  Wickr on iPad.

Each figure has certain features highlighted.  In each figure the red box shows the icons for setting the expiration timer and burn-on-read (a setting that allows the sender of a message to set a self-destruct timer on a message before it is sent – the recipient has no control over this feature), the blue arrow shows the area where a user composes a message, the orange arrow shows the area where conversations are listed, and the purple arrow shows the contents of the highlighted conversation (chosen in the conversations list).  Not highlighted is the phone icon seen in the upper right corner of each figure, which initiates an audio call with the conversation participant(s).

The plus sign seen in the screen (red boxes) reveals a menu that has additional options: send a file (including media files), share a user’s location, or use one of the installed quick responses. Visually, the menu will look slightly different per platform, but the functionality is the same.  See Figure 5.

Figure 5
Figure 5.  Additional activity options (Windows UI).

The sending and receiving of messages and files works as other messaging applications with similar capabilities. Figure 6 shows an active conversation within the macOS UI.

Figure 6.png
Figure 6.  A conversation with a text message and picture attachments (macOS UI).

Wickr is similar to Snapchat in that messages “expire” after a set period of time. The default time a message is active is six (6) days, which is the maximum amount of time a message can be available, but a user can set message retention times as short as one second. This setting is device specific; if a user has multiple devices they can choose different retention periods for each device.

Users can also set “burn-on-read” times in which a message will expire (“burn”) after a certain period of time after the message has been read. This setting is controlled by the message sender, regardless of the recipient’s message retention period setting.  The retention period for burn-on-read messages can also be set anywhere between 1 second and 6 days. Figure 7 shows the Windows Wickr UI when a burn-on-read message has been received and opened (bottom of the active conversation window pane), and Figure 8 shows the same UI after the burn-on-read retention period expired.

Figure 7
Figure 7.  A burn-on-read message (timer in red).
Figure 8
Figure 8.  Poof! The message has been burned.
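The burn-on-read bounds described above (one-second minimum, six-day maximum) can be modeled in a few lines for timeline work.  This is a hypothetical helper for reasoning about timestamps during an examination, not Wickr code; the clamping behavior is an assumption based on the limits the UI exposes.

```python
from datetime import datetime, timedelta, timezone

# Bounds exposed by the Wickr UI; the clamp itself is an assumption about
# how out-of-range values would be handled.
MIN_TTL = timedelta(seconds=1)
MAX_TTL = timedelta(days=6)

def burn_time(read_at: datetime, ttl: timedelta) -> datetime:
    """When a burn-on-read message should disappear after being read."""
    clamped = max(MIN_TTL, min(ttl, MAX_TTL))
    return read_at + clamped

read = datetime(2019, 8, 1, 12, 0, tzinfo=timezone.utc)
print(burn_time(read, timedelta(days=30)))  # anything over 6 days clamps down
```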

The Secure Shredder function is Wickr’s feature by which data that has been deleted by the app is rendered unrecoverable by overwriting it.  Secure Shredder is an automated feature that runs in the background, but users in the Wickr Pro Silver or Gold tiers can also initiate it manually.  Testing showed this feature automatically runs every +/- one (1) minute while the device is idle.

Encryption.  All Of The Encryptions.

Wickr is designed with total privacy in mind, so all three versions use the same encryption model. The app not only protects messages, media, and files in transit, but it also protects data at rest. The app has been designed with perfect forward secrecy; if a user’s device is compromised, historical communications are still protected unless the attacker has the user’s password and the messages have not expired.

When a new message is received, it arrives in a “locked” state.  See Figure 9.

Figure 9.png
Figure 9.  A new message.

When a message is sent, the sender’s device will encrypt the message using a symmetric key. To generate the symmetric key, internal APIs gather random numbers which are then run through the AES-256 cryptographic algorithm in Galois/Counter Mode (GCM). Each message is encrypted using a new symmetric key, and this operation occurs strictly on the sender’s device. This encryption happens regardless of whether the message contains text, a file, or a combination of the two. The cipher text and the symmetric key (i.e. the package) are encrypted using the signed public key of the recipient’s device (I’ll discuss asymmetric operations in a minute), and then sent to the recipient who then decrypts the package using their private key. The symmetric key is then applied to the cipher text in order to decrypt it.
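Wickr’s implementation is proprietary, but the per-message symmetric step can be sketched with the third-party `cryptography` package.  This is purely illustrative: the package choice and the 96-bit nonce are my assumptions, not details from Wickr’s paper.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A fresh 256-bit key is generated for every message.
message_key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(message_key)

nonce = os.urandom(12)  # 96-bit nonce, the conventional size for GCM
plaintext = b"meet at the usual spot"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# In Wickr, ciphertext + message_key (the "package") would now be encrypted
# with the recipient device's signed public key; here we only show the
# symmetric round trip that happens on the endpoints.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
```

Because each message gets its own key, compromising one ciphertext/key pair exposes only that message – the property the next few paragraphs build on.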

The takeaway here is that unlocking a received message = decrypting a received message.  A user may set their device to automatically unlock messages, but the default behavior is to leave them locked on receipt and manually initiate the unlock.

Asymmetric operations are applied to messages in transit.  As previously mentioned, the cipher text and the symmetric key used to encrypt it are packaged up and encrypted using the public key of the intended recipient’s device.  The public key is signed with components from said device.  The recipient device uses the corresponding private key to decrypt the package, and then the symmetric key is used to decrypt the cipher text (unlocking the message) so the recipient can read it.  If a message is intended for multiple recipients or for a recipient who has multiple devices, a different set of keys is used for each destination device.

Here is where the pain starts to come. The keys used in the asymmetric operations are ephemeral; a different set of public/private key pairs are used each time a message is exchanged between devices. Wickr states in its technical paper that pools of components (not the actual keys themselves) of private-public pairs are created and refreshed by a user’s device while they are connected to Wickr’s servers.  If a device is disconnected from the Wickr servers, it will use what key pairs it has, and will then refresh its pool once it has re-established the connection.

Even if a private key is compromised, the only message that can be decrypted is the one that corresponds to that specific private/public key pair; the rest of the messages are still safe since they use different pairs.

But wait, it gets worse.  Just to turn the knife a bit more, Wickr has a different encryption scheme for on-device storage that is separate from message transfers.  When Wickr is first installed on a device a Node Storage Root Key (Knsr) is generated. The Knsr is then applied to certain device data (described as “device specific data and/or identifiers derived from installed hardware or operating system resources that are unique, constant across application installs but not necessary secret“) to generate the Local Device Storage Key (Klds). The Klds is used to encrypt Wickr data stored locally on the device, including files necessary for Wickr to operate.

The Klds is itself encrypted using a key derived from the user’s password being passed through scrypt. When a user successfully logs in to the Wickr app, the Klds is decrypted and placed into the device’s memory, allowing for successful exposure of the locally stored Wickr data through the app UI. When the app is terminated, placed in an inactive state, or a user logs out, the Klds is removed from memory, and the Wickr data is no longer available.
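The password-to-key step can be sketched with Python’s stdlib scrypt.  To be clear, the cost parameters and salt handling below are placeholders – Wickr does not publish its scrypt settings – so this shows the shape of the derivation, not values usable against a real install.

```python
import hashlib
import os

password = b"correct horse battery staple"  # stand-in for the user's Wickr password
salt = os.urandom(16)                       # where/how Wickr stores its salt is not public

# Common interactive-login scrypt costs; Wickr's real parameters are unknown.
kek = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

# kek would then decrypt the stored, encrypted Klds blob; without that blob
# and the real parameters, offline attacks stop here.
```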

For those who have legal processes at their disposal (court orders, search warrants, and subpoenas), the news is equally dire.  Wickr does keep undelivered messages on its servers for up to six (6) days, but, as I previously mentioned, those messages (which are in transit) are encrypted.  Wickr states they do not have access to any keys that would decrypt the stored messages.  There is some generic account and device information, but no message content.  For more information on what little they do have, please read their legal process guide.

So, Is There Anything I Can Actually Get?

The answer to this question is the standard digital forensics answer:  “It depends.” The encryption scheme combined with the on-device security measures makes it extremely difficult to recover any useful data from either the app or Wickr, but there is some data that can be retrieved, the value of which depends on the goal of the examination.

Testing has shown a manual examination is the only way, as of the time of this post, to recover message content from iOS, macOS, and Windows (files not included). This requires unfettered access to the device along with the user’s Wickr password. Due to a change in its encryption scheme (when this happened is unknown), Wickr is not supported by any tool I tested on any platform, which included the current versions of Cellebrite, Axiom, and XRY. This included the Android virtualization options offered by two mobile vendors.  Along those same lines, I also tried Alexis Brignoni’s virtualization walkthrough using Nox Player, Virtual Box, and Genymotion, with no luck on all three platforms.

Things can be slightly different for those of you who have Wickr deployed in an enterprise environment.  The enterprise flavor of Wickr does have compliance (think FOIA requests and statutory/regulatory requirements) and eDiscovery features, which means message content may be retained as long as the feature is enabled (I didn’t have access to this version, so I could not ascertain if this was the case).  Just be aware that if the environment includes Wickr, this may be an option for you.

The type and amount of data an examiner can possibly get is dependent upon which platform is being examined.  The nice thing is there is some consistency, so this can help examiners determine, rather quickly, if there is anything to be recovered.  The consistency can be broken up into two categories:  iOS and Android/macOS/Windows.  One thing is consistent across ALL platforms, though: an examiner should not expect to find any message content beyond six (6) days from the time of examination.


The most important thing to remember for the Android, macOS, and Windows platforms is that order matters when doing a manual examination.  That is, the order in which you examine the device for Wickr content is important.  Failure to keep this in mind may result in recoverable data being deleted unnecessarily.  Android can be slightly different, which I will discuss shortly.

I will go ahead and get one thing out of the way for all three platforms: the databases containing account information, conversation information, contacts, and message content are all encrypted.  The database is protected with SQLCipher 3, and the Wickr user password is not the password to the database (I tried applying the correct Wickr password in DB Browser for SQLite, and none of the databases would open – you’ll see why below).  Figure 10 shows the macOS database in hexadecimal view and Figure 11 shows the Windows database.  While not shown here, just know the Android database looks the same.

Figure 10.png
Figure 10.  Wickr database in macOS
Figure 11.PNG
Figure 11.  Wickr database in Windows.

You may have noticed the file name for both macOS and Windows is the same:  wickr_db.sqlite.  The similarities do not stop there.  Figure 12 shows the location of the database in macOS, Figure 13 shows the database’s location in Windows.

Figure 12
Figure 12.  Home in macOS.  ~/Users/%UserName%/Library/Application Support/Wickr, LLC/WickrMe/
Figure 13
Figure 13.  Home in Windows.  C:\Users\%UserName%\AppData\Local\Wickr, LLC\WickrMe\

As you can see, most of the file names in each home directory are the same.  Note that the last folder in the path, “WickrMe,” may be different depending on what version is installed on the device (Wickr Me, Wickr Pro, Enterprise), so just know the last hop in the path may not be exactly the same.

Interesting note about the “preferences” file in Windows:  it is not encrypted.  It can be opened, and doing so reveals quite a bit of octet data.  The field “auid” caught my attention, and while I have a theory about its value, I’ll save it for another blog post.

For Android, the directory and layout should look familiar to those who examine Android devices.  The database file, wickr_db, sits in the databases folder.  See Figure 14.

Figure 14
Figure 14.  Home in Android.  /data/data/com.mywickr.wickr2

If you will recall, when a user unlocks a message they are actually decrypting it.  This also applies to files that are sent through Wickr.  Unlike messages, which are stored within the database, files, both encrypted and decrypted, reside in the Wickr portion of the file system.  When a message with a file is unlocked, an encrypted version of the file is created within the Wickr portion of the file system.  When the file is opened (not just unlocked), it is decrypted, and a decrypted version of that file is created within a different path within the Wickr portion of the file system.  Figure 15 shows the Android files, Figure 16 shows the macOS files, and Figure 17 shows the Windows files.  The top portion of each figure shows the files in encrypted format and the bottom portion shows the files in decrypted format.

Figure 15.png
Figure 15.  Encrypted/decrypted files in Android.
Figure 16
Figure 16.  Encrypted/decrypted files in macOS.
Figure 17
Figure 17.  Encrypted/decrypted files in Windows.

When a file is sent through Wickr it is given a GUID, and that GUID is consistent across devices for both the sender and the recipient(s).  In the figures above, Android represents Test Account 1 and macOS/Windows represents Test Account 2, so you will notice that the same GUIDs are seen on both accounts (all three platforms).

The fact an encrypted version of a file exists indicates a device received the file and the message was unlocked, but it doesn’t necessarily indicate the file was opened.  It isn’t until the user chooses to open the file within the Wickr UI that a decrypted version is deposited onto the device as seen above.  An example of the open dialogue is seen in Figure 18.  The triple dots in the upper right-hand corner of the message bubble invoke the menu.

Figure 18.png
Figure 18.  Open dialogue example (Windows UI).

If a picture is received and the message is unlocked, then a thumbnail is rendered within the Wickr UI message screen, as seen in Figure 18, but this does not deposit a decrypted version of that picture; the user must open the file.  Any other file type, including videos, merely displays the original file name in the Wickr UI.  A user will have to open the file in order to view its contents.

The directories for files on each platform are as follows (each path starts in the Wickr home directory):

Platform    Encrypted Files         Decrypted Files

Android     ~/files/enc             ~/cache/dec

macOS       ~/temp/attachments      ~/temp/preview

Windows     ~/temp/attachments      ~/temp/preview

This behavior applies to files both sent and received.  Also keep in mind you may find encrypted files with no corresponding decrypted version.  This may be because the message retention time expired, which is why the order of examination is important, or it may mean the user never opened the file.
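Since the enc/dec pairing is driven entirely by the shared GUID file names, triaging for files that were received but never opened reduces to a set difference.  A minimal sketch; the stem-matching is an assumption based on the naming observed in Figures 15–17.

```python
from pathlib import Path

def unopened_files(enc_dir: str, dec_dir: str) -> set:
    """Return GUID stems present in the encrypted store with no decrypted twin."""
    enc = {p.stem for p in Path(enc_dir).iterdir() if p.is_file()}
    dec = {p.stem for p in Path(dec_dir).iterdir() if p.is_file()}
    return enc - dec

# e.g. unopened_files("files/enc", "cache/dec") against an Android extraction
```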

For both macOS and Windows, the only way to recover message content is via a manual examination using the Wickr UI, which means a logical image should contain the sought-after data.  However, the order of your examination can impact your ability to recover any decrypted files that may be present on the device.  Since the Wickr application keeps track of which files may have passed their message retention period, it is extremely important to check for decrypted files prior to initiating Wickr on the device for a manual examination.  Failure to do so will result in any decrypted file whose message retention time has expired being deleted.

The Android Database.  Slightly Different.

While the databases for macOS and Windows are inaccessible, the story is better for Android.  While conducting research for this post I discovered Cellebrite Physical Analyzer was not able to decrypt the wickr_db database even though it was prompting for a password.  Cellebrite confirmed Wickr had, in fact, changed their encryption scheme and Physical Analyzer was not able to decrypt the data.  A short time later they had a solution, which allowed me to proceed with this part of the post.  While not currently available to the public, this solution will be rolled out in a future version of Physical Analyzer.  Fortunately, I was granted pre-release access to this feature.

Again, thank you Heather and Or.  🙂

While there is still a good deal of data within the wickr_db file that is obfuscated, the important parts are available to the examiner, once decrypted.  The first table of interest is “Wickr_Message.”  See Figure 19.

Figure 19.png
Figure 19.  Wickr_Message table.

The blue box is the timestamp for sent and received messages (Unix epoch) and the orange box contains the text of the message or the original file name that was either sent or received by the device.  The timestamp in the red box is the time the message will be deleted from the Wickr UI and database.  The values in the purple box are interesting.  Based on testing, each file I sent or received had a value of 6000 in the messageType column.  While not that important here, these values are important when discussing iOS.
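Once decrypted, the table can be walked with plain sqlite3.  In the mock below the timestamp column names are hypothetical stand-ins (I identified those columns above by colored box, not by name); messageType, cachedText, and the 6000 file-transfer value come from my testing.

```python
import sqlite3
from datetime import datetime, timezone

# Mock of a decrypted Wickr_Message table. 'sentTime' and 'destructTime' are
# hypothetical column names; messageType and cachedText are real ones.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Wickr_Message "
            "(sentTime INTEGER, destructTime INTEGER, messageType INTEGER, cachedText TEXT)")
con.execute("INSERT INTO Wickr_Message VALUES (1565000000, 1565518400, 6000, 'iOS_screenshot')")

for sent, burn, mtype, text in con.execute("SELECT * FROM Wickr_Message"):
    sent_dt = datetime.fromtimestamp(sent, tz=timezone.utc)
    burn_dt = datetime.fromtimestamp(burn, tz=timezone.utc)
    kind = "file transfer" if mtype == 6000 else "message"
    print(f"{sent_dt:%Y-%m-%d %H:%M:%S} UTC  {kind}: {text} (deleted {burn_dt:%Y-%m-%d})")
```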

The blobs in the messagePayload column are interesting in that they contain a lot of information about file transfers between devices.  See Figure 20.

Figure 20.png
Figure 20.  Message payload.

The file sender can be seen next to the red arrow, the file type (e.g., picture, document, etc.) is next to the blue arrow, and the GUID assigned to the file is seen in the green box.  The GUID values can be matched up to the GUIDs of the files found in the /enc and /dec folders.  Here, the GUID in the green box in Figure 20 can be seen in both folders in Figure 21.  Finally, you can see the original name of the file next to the purple arrow (iOS_screenshot).  The original file name also appears in the cachedText column in Figure 19.

Figure 21
Figure 21.  Matching GUIDs

The orange box in Figure 20 contains the recipient’s username along with the hash value of the recipient’s Wickr user ID.  That value can be matched up to the value in the senderUserIDHash column in the same table (see the red box in Figure 22).  The title of this column is deceptive, because it isn’t actually the user ID that is represented.

Figure 22
Figure 22.  Sender’s hashed ID

Figure 23 shows the same hash in the serverIDHash column of the Wickr_ConvoUser table.

Figure 23
Figure 23.  Same IDs.

Also of note in this table and the one seen in Figure 22 is the column vGroupID.  Based on testing, it appears every conversation is considered to be a “group,” even if that group only has two people.  For example, in my testing I only had my two test accounts that were conversing with each other.  This is considered a “group,” and is assigned a GUID (seen in the blue box).  The good thing about this value is that it is consistent across devices and platforms, which could come in handy when trying to track down conversation participants or deleted conversations (by recovering it from another device).  An example of this cross-platform-ing is seen in Figure 24, which shows the table ZSECEX_CONVO from the Wickr database in iOS.  Note the same GroupID.

Figure 24
Figure 24.  Same group ID.
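Because the vGroupID survives across devices and platforms, cross-matching two extractions reduces to a set intersection.  A sketch with fabricated values (real vGroupIDs are GUIDs):

```python
def shared_conversations(rows_a, rows_b):
    """vGroupIDs appearing in both extractions; rows are (vGroupID, serverIDHash)."""
    return {g for g, _ in rows_a} & {g for g, _ in rows_b}

# Fabricated example rows from Wickr_ConvoUser on two devices.
device_a = [("group-guid-1", "hash-user1"), ("group-guid-2", "hash-user2")]
device_b = [("group-guid-1", "hash-user1")]
common = shared_conversations(device_a, device_b)
```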

Figure 25 shows, again, the serverIDHash, but this time in the table Wickr_User.  It is associated with the value userIDHash.  The value userAliasHash (the same table) is seen in Figure 26.

Figure 25
Figure 25.  serverIDHash and the userIDHash.
Figure 26
Figure 26.  userIDHash (Part 2).

Figure 27 shows some telemetry for the users listed in this table.

Figure 27
Figure 27.  User telemetry.

The columns isHidden (purple box) and lastMessaged (red box) are self-explanatory.  The value of 1 in the isHidden column means the user does not appear in the Conversations section of the UI.  That value coupled with the value of 0 in the lastMessaged column indicates this row in the table probably belongs to the logged in account.

The lastRefreshTime column (blue box) has the same value in both cells.  The timestamp in the cell for row 1 is when I opened the Wickr app, which, undoubtedly, caused the app to pull down information from the server about my two accounts.  Whether this is what this value actually represents requires more testing.  The same goes for the values in the lastActivityTime column (orange box).  The value seen in the cell in row 1 is, based on my notes, the last time I pushed the app to the background.  The interesting thing here is there was activity within the app after the timestamp (the following day around lunch time PDT).  More testing is required in order to determine what these values actually represent.  For now, I would not trust lastActivityTime at face value.

The table Wickr_Settings contains data of its namesake.  The first column of interest is appConfiguration (red box).  See Figure 28.

Figure 28.PNG
Figure 28.  Wickr_Settings.

The data in this cell is in JSON format.  Figure 29 shows the first part of the contents.

Figure 29
Figure 29.  JSON, Part 1.

There are two notable values here.  The first, in the blue box, is self-explanatory:  locationEnabled (Wickr can use location services).  I let Wickr have access to location services during initial setup, so this value is set to ‘true.’  The value in the red box, alwaysReauthenticate, refers to the setting that determines whether or not a user has to log in to Wickr each time the app is accessed.  It corresponds to the switch in the Wickr settings seen in Figure 30 (red box).

Figure 30.PNG
Figure 30.  Login each time?  No thanks.

Because I didn’t want to be bothered with logging in each time, I opted to have Wickr save my password and log in automatically each time, thus this value is set to ‘false.’  If a user requires reauthentication and does not provide the Wickr password, a manual examination will be impossible.

The rest of the contents of the JSON data are unremarkable, and are seen in Figure 31.

Figure 31
Figure 31.  JSON, Part 2.  Nothing much.

There are three additional columns that are notable in this table.  The first is the setting for the Secure Shredder, autoShredderEnabled.  This value is set to 1, which means it is enabled.  I would not expect to see any other value in this cell, as Secure Shredder runs automatically in Wickr Me and some tiers of Wickr Pro; there is no way to disable it unless the Silver, Gold, or Enterprise version of Wickr is present.  See Figure 32.

Figure 32
Figure 32.  Anonymous Notifications, Secure Shredder, and Auto Unlock.

The second notable column is unlockMessagesEnabled (red box).  As its name implies, this setting dictates whether a message is unlocked on receipt, or if a user has to manually initiate the unlock.  I took the default setting, which is not to unlock a received message (database value of 0).  Figure 33 shows the setting in the Wickr Settings UI.

Figure 33
Figure 33.  Message Auto Unlock switch.

Figure 32 also shows anonymousNotificationsEnabled (orange box).  This setting dictates whether Wickr notifications provide any specific information about a received message/file (e.g., sender’s user name, text of a message, file name), or if the notification is generic (e.g., “You have a new message”).  Again, the default is to show generic notifications (database value of 1).  Figure 34 shows the setting in the Wickr Settings UI.  Note the switch is off, but since I have Auto Unlocks disabled, this switch is not used because my messages are not automatically unlocked on receipt.

Figure 34.PNG
Figure 34.  Anonymous Notification setting.
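Pulling the three settings together, a small decoder makes the 0/1 database values readable for a report.  The meanings follow my testing above; treat them as observed behavior for this app version, not documented semantics.

```python
def describe_settings(row: dict) -> list:
    """Translate observed Wickr_Settings flags into plain English."""
    return [
        "Secure Shredder: " + ("enabled" if row["autoShredderEnabled"] else "disabled"),
        "Auto-unlock on receipt: " + ("yes" if row["unlockMessagesEnabled"] else "no (manual unlock)"),
        "Notifications: " + ("generic" if row["anonymousNotificationsEnabled"] else "detailed"),
    ]

# Values matching the defaults discussed above.
summary = describe_settings({"autoShredderEnabled": 1,
                             "unlockMessagesEnabled": 0,
                             "anonymousNotificationsEnabled": 1})
```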

I want to address one last table:  Wickr_Convo.  Using the conversation GUIDs, you can determine the last activity within each conversation that is stored on the device.  In Figure 35, the conversation GUID is the same as the ones seen in Figures 23 and 25.

Figure 35
Figure 35.  Conversations listed by GUID.

There are two values that are notable.  The first is the lastOutgoingMessageTimestamp (red box).  That is a pretty self-explanatory label, right?  Not quite, and examiners should be careful interpreting this value.  That same timestamp appears in the Wickr_Message table seen in Figure 37, but with a different label.

Figure 36
Figure 36.  Wickr_Convo table timestamps.
Figure 37.png
Figure 37.  Wickr_Message table timestamps.

It appears the lastOutgoingMessageTimestamp from Wickr_Convo applies to the last message that did not involve a file transfer (that timestamp value is seen in the Wickr_Message table).  The value lastUpdatedTimestamp (blue box in Figure 36) actually represents the last communication (message or file transfer) in the conversation, which is seen in the blue-boxed timestamp value in the Wickr_Message table (blue box in Figure 37).

The value messageReadTimestamp (orange box in Figure 36) represents the time the last message was unlocked.  Notice that the value is just about the same as that seen in lastUpdatedTimestamp, but with more granularity.

A couple more things

There are two more files I’d like to touch on with regards to the Android version.  The first is com.google.android.gms.measurement.prefs.xml found in the /shared_prefs folder.  See Figure 38.

Figure 38
Figure 38.  Measurement data for the app.

This file keeps track of certain data about app usage.  The most obvious data points are the install time for the app itself (orange box) and the first time the app was opened (red box).  The next two data points are app_backgrounded (yellow box) and last_pause_time (green box).  The app_backgrounded value, as you can see, is a boolean value that indicates whether the app is active on the device screen or running in the background (i.e., not front-and-center on the device).  The value last_pause_time is the last time the app was pushed to the background by the user (“paused”).  If an examiner is pulling this data from a seized device, it is highly likely that the app_backgrounded value will be true, unless the device is seized and imaged while Wickr is actively being used.
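shared_prefs files use the standard Android SharedPreferences XML layout, so the values above parse with any XML library.  The element values below are illustrative, not from a real extraction; note that SharedPreferences longs are Unix epoch milliseconds.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Illustrative snippet in the SharedPreferences layout; real files hold more keys.
xml_data = """<?xml version='1.0' encoding='utf-8'?>
<map>
    <boolean name="app_backgrounded" value="true" />
    <long name="last_pause_time" value="1565900000000" />
</map>"""

prefs = {el.get("name"): el.get("value") for el in ET.fromstring(xml_data)}

backgrounded = prefs["app_backgrounded"] == "true"
# SharedPreferences stores longs as epoch *milliseconds*.
last_pause = datetime.fromtimestamp(int(prefs["last_pause_time"]) / 1000, tz=timezone.utc)
```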

The value in the blue box, last_upload, is deceiving, and I have yet to figure out what exactly it represents.  I have a theory that it may be the last time the app uploaded information about its current public key, which is used in the asymmetric encryption operations during message transport, but I cannot be totally sure at this point.  Just know that last_upload may not necessarily represent the last time a file was uploaded.

The last file is COUNTLY_STORE.xml.  Based on research, it appears this file may be used for analytical purposes in conjunction with the Countly platform.  This file keeps some metrics about the app, including the cell service carrier, platform version (SDK version), hardware information, and a unique identifier, which, on Android, is the advertising ID (adid).  The data appears to be broken up into transactions, with each transaction containing some or all of the data points I just mentioned.  Each transaction appears to be separated by triple colons.  Each also contains a timestamp.
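Splitting such a file into transactions is straightforward string work.  The payload below is fabricated; only the ':::' separator comes from the observation above, and the key=value layout is an assumption for illustration.

```python
# Fabricated COUNTLY_STORE-style content; ':::' separates transactions.
raw = "carrier=TestCarrier&sdk_version=19.0:::device=Pixel3&timestamp=1565000000"

transactions = [t for t in raw.split(":::") if t]
parsed = [dict(kv.split("=", 1) for kv in t.split("&")) for t in transactions]
```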

A representative example can be seen in Figure 39; it does not contain all of the data points I mentioned, but it gives you a good idea as to what to expect.

Figure 39
Figure 39.  COUNTLY_STORE.xml in Android.
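The triple-colon delimiter makes the transactions easy to split apart for review. A minimal sketch, with an invented payload (real transactions vary in content and key names):

```python
# Hypothetical COUNTLY_STORE content; the keys and values are invented
# for demonstration only.
raw = ("carrier=TestCarrier&timestamp=1563324158"
       ":::sdk_version=19.02&timestamp=1563324190")

def split_transactions(blob):
    # Transactions are separated by ':::'; drop any empty trailing chunk.
    return [t for t in blob.split(":::") if t]

txns = split_transactions(raw)
print(len(txns))  # 2
```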

This file is inconsistent.  On some of my extractions the file was empty after app use, and on others it was full of data.  Sometimes the timestamps coincided with my being in the app, and others did not.  There does not seem to be enough consistency to definitively say the timestamps seen in this file are of any use to examiners.  If someone has found otherwise please let me know.

There is an iOS equivalent:  County.dat.  This file contains most of the same data points I already described, and while it has a .dat extension, it is a binary plist file.  In lieu of the adid (from Android), a deviceID is present in the form of a GUID.  I think this deviceID serves more than one purpose, but that is speculative on my part.
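Because the file is a binary plist despite its .dat extension, Python's standard plistlib reads it directly. The sketch below builds a small stand-in plist rather than reading the real file, and the deviceID key name is an assumption:

```python
import plistlib

# Build a tiny binary plist standing in for County.dat; the key name and
# GUID are invented for demonstration.
blob = plistlib.dumps(
    {"deviceID": "6F9619FF-8B86-D011-B42D-00CF4FC964FF"},
    fmt=plistlib.FMT_BINARY)

# With a real extraction you would instead do:
#   with open(path_to_county_dat, "rb") as f: data = plistlib.load(f)
data = plistlib.loads(blob)
print(data["deviceID"])
```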

Speaking of iOS…

iOS is different. Of course it is.

The iOS version of Wickr behaves a little differently, probably due to how data is natively stored on iOS devices.  The data is already encrypted and hard to access.  The two biggest differences, from a forensic standpoint, are that there are no decrypted versions of opened files and that the database is not encrypted.

Before I proceed any further, though, I do want to say thank you again to Mike Williamson for his help in understanding how the iOS app operates under the hood.  🙂

I searched high and low in my iOS extractions and never found decrypted versions of files on my device.  So there are two possible explanations:  one, they are in a place I didn’t look (highly unlikely, but not completely impossible), or two, they are never created in the first place.  I’m leaning towards the latter.  Regardless, there are no decrypted files to discuss.

Which leaves just the database itself.  While the database is not encrypted, a majority of the data written to the table cells is.  I will say I am aware of at least two mobile device forensic vendors, who shall not be named at this time, that will probably have support for Wickr on iOS in the near future.  In the meantime, though, we are left with little data to review.

The first table is ZWICKR_MESSAGE, and, as you can guess, it contains much of the same data as the Wickr_Message table in Android.  Remember when I mentioned the messageType value in Android?  In iOS that value is ZFULLTYPE.  See Figure 40.

Figure 40
Figure 40.  Type 6000.

The value of 6000 is seen here, and, as will be seen shortly, corresponds to files that have been sent/received.  Also, note the Z_PK values 8 and 10, respectively, because they will be seen in another table.

Figure 41 shows some additional columns, the titles of which are self-explanatory.  One I do want to highlight, though, is the ZISVISIBLE column.  The two values in red boxes represent messages I deleted while within the Wickr UI.  There is a recall function in Wickr, but I was not able to test this out to see if this would also place a value of 0 in this column.

Figure 41
Figure 41.  Deleted message indicators.

Figure 42 shows another set of columns in the same table.  The columns ZCONVO and Z4_CONVO actually come from a different table within the database, ZSECEX_CONVO.  See Figures 42 and 43.

Figure 42
Figure 42.  Conversation and Calls.
Figure 43
Figure 43.  ZSECX_CONVO table.

In Figure 42 the two columns highlighted in the orange box, ZLASTCALLCONVO and Z4_LASTCALLCOVO, appear to keep track of calls made via Wickr; in my case these are audio calls.  Here, the value indicates the last call to take place, and what conversation it occurred in.  This is interesting since the Android database did not appear to keep track of calls as far as I could tell (the data may have been encrypted).  Remember, this table is equivalent to the Wickr_ConvoUser table in the Android database, so you will be able to see the ZVGROUPID shortly.

The next bit of the table involves identifying the message sender (ZUSERSENDER), the timestamp of the message (ZTIMESTAMP), the time the message will expire (ZCLEANUPTIME), and the message identifier (ZMESSAGEID).  The timestamps in this table are stored in Core Foundation Absolute Time (CFAbsolute).  See Figure 44.

Figure 44
Figure 44.  Messages and their times.
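Converting Core Foundation Absolute Time to something readable is a fixed-offset calculation: CFAbsolute counts seconds from 2001-01-01 00:00:00 UTC, which is 978307200 seconds after the Unix epoch. A minimal sketch:

```python
from datetime import datetime, timezone

# Seconds between the Unix epoch (1970-01-01) and the Core Foundation
# epoch (2001-01-01), both UTC.
CF_EPOCH_OFFSET = 978307200

def cfabsolute_to_datetime(cf_seconds):
    """Convert a CFAbsoluteTime value to a UTC datetime."""
    return datetime.fromtimestamp(cf_seconds + CF_EPOCH_OFFSET,
                                  tz=timezone.utc)

print(cfabsolute_to_datetime(0))  # 2001-01-01 00:00:00+00:00
```

Remember to apply the device's time zone afterwards if you need local time.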

The values in the ZUSERSENDER column can be matched back to the Z_PK column in the ZSECX_USER table.

That’s it for this table!  The rest of the contents, including the ZBODY column, are encrypted.

The ZSECX_CONVO table has some notable data as seen in Figure 45.  The one column I do want to highlight is ZLASTTIMESTAMP, which is the time of the last activity (regardless of what it was) in the conversation (the “group”).  Interestingly, the times here are stored in Unix Epoch.

Figure 45
Figure 45.  Last time of activity in a conversation (group).

Figure 46 shows some additional data.  The last conversation in which a call was either placed or received is seen in the column ZLASTCALLMSG (orange box – the timestamp can be obtained from the ZWICKR_MESSAGE table), along with the last person that either sent or received anything within the conversation (ZLASTUSER – red box).  The value in the ZLASTCALLMSG column can be matched back to the values in the Z_PK column in the ZWICKR_MESSAGE table.  The value in the ZLASTUSER column can be matched back to the Z_PK column in the ZSECX_USER table.  And, finally, as I previously showed in Figure 24, the ZVGROUPID (blue box).

Figure 46
Figure 46.  The last of the ZSECX_CONVO table.
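The key-matching described above can be expressed as a pair of joins. The sketch below runs against a throwaway in-memory database with invented rows and trimmed-down column sets; check table and column names against your own copy of the iOS database before reusing the query.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Miniature versions of the three tables; real schemas are much larger.
con.executescript("""
CREATE TABLE ZWICKR_MESSAGE (Z_PK INTEGER PRIMARY KEY, ZTIMESTAMP REAL);
CREATE TABLE ZSECX_USER  (Z_PK INTEGER PRIMARY KEY, ZNAME TEXT);
CREATE TABLE ZSECX_CONVO (Z_PK INTEGER PRIMARY KEY,
                          ZLASTCALLMSG INTEGER, ZLASTUSER INTEGER);
INSERT INTO ZWICKR_MESSAGE VALUES (8, 585000000.0);
INSERT INTO ZSECX_USER  VALUES (2, 'Test Account 2');
INSERT INTO ZSECX_CONVO VALUES (1, 8, 2);
""")

# Resolve the last call message and last active user per conversation.
row = con.execute("""
SELECT c.Z_PK, m.ZTIMESTAMP, u.ZNAME
FROM ZSECX_CONVO c
JOIN ZWICKR_MESSAGE m ON m.Z_PK = c.ZLASTCALLMSG
JOIN ZSECX_USER  u    ON u.Z_PK = c.ZLASTUSER
""").fetchone()
print(row)  # (1, 585000000.0, 'Test Account 2')
```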

The table ZSECEX_USER, as seen in Figures 47 and 48, contains data about not only the account owner, but also about users who the account holder may be conversing with.  The table contains some of the same information as the Wickr_User table in Android.  In fact, Figure 47 looks very similar to Figure 27.  The values represent the same things as well.

Figure 47
Figure 47.  Hidden status and last activity time.

Figure 48 shows the same items as seen in Figure 26, but, as you can see, the hash values are different, which makes tracking conversation participants using this information impossible.

Figure 48
Figure 48.  Same participants, different hashes.

File transfers in iOS are a bit tricky because some of the data is obfuscated, and in order to figure out which file is which an examiner needs to examine three tables:  Z_11MSG, ZWICKR_MESSAGE, and ZWICKR_FILE.  Figure 49 shows the Z_11MSG table.

Figure 49
Figure 49.  Z_11MSG.

The column Z_13MSG refers to the ZWICKR_MESSAGE table, with the values 8 and 10 referring to values in the Z_PK column in that table.  See Figure 50.

Figure 50
Figure 50.  Transferred files.

Obviously, associated timestamps are found in the same row further into the table.  See Figure 51.

Figure 51
Figure 51.  Timestamps for the transferred files.

The column Z_11FILES in Figure 49 refers to the ZWICKR_FILE table.  See Figure 52.

Figure 52
Figure 52.  Files with their GUIDs.

The values in the Z_11FILES column in Figure 49 refer to the values in the Z_PK column seen in Figure 52.  Figure 53 shows the files within the file system.  As I previously mentioned, there are no decrypted versions of these files.

Figure 53
Figure 53.  The file GUIDs from the database table.
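The three-table walk (Z_11MSG to ZWICKR_MESSAGE for the timestamp, Z_11MSG to ZWICKR_FILE for the GUID) can be collapsed into one query. Again, this runs against a throwaway in-memory database with invented rows; treat the column names as assumptions drawn from the figures above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Minimal stand-ins for the junction table and its two targets.
con.executescript("""
CREATE TABLE Z_11MSG (Z_11FILES INTEGER, Z_13MSG INTEGER);
CREATE TABLE ZWICKR_MESSAGE (Z_PK INTEGER PRIMARY KEY, ZTIMESTAMP REAL);
CREATE TABLE ZWICKR_FILE (Z_PK INTEGER PRIMARY KEY, ZFILEGUID TEXT);
INSERT INTO Z_11MSG VALUES (1, 8), (2, 10);
INSERT INTO ZWICKR_MESSAGE VALUES (8, 585000001.0), (10, 585000002.0);
INSERT INTO ZWICKR_FILE VALUES (1, 'AAAA-1111'), (2, 'BBBB-2222');
""")

# Pair each transferred file's GUID with its message timestamp.
rows = con.execute("""
SELECT f.ZFILEGUID, m.ZTIMESTAMP
FROM Z_11MSG j
JOIN ZWICKR_FILE f    ON f.Z_PK = j.Z_11FILES
JOIN ZWICKR_MESSAGE m ON m.Z_PK = j.Z_13MSG
ORDER BY m.ZTIMESTAMP
""").fetchall()
print(rows)
```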

Figure 54 shows the ZANONYMOUSNOTIFICATION and ZAUTOUNLOCKMESSAGES values from the ZSECEX_ACCOUNT table (the Android values were seen in Figure 32).  Both values here are zero, meaning I had these features turned off.

Figure 54
Figure 54.  Anonymous Notification and Auto Unlock settings in iOS.

The last table I want to highlight is the ZSECX_APP table.  See Figure 55.

Figure 55
Figure 55.  Users and their associated app installations.

The values in the ZUSER column relate back to the values seen in the Z_PK column in the ZWICKR_USER table.  Each different value in the ZAPPIDHASH column represents a different app install on a device.  For example, Test Account 1 appeared on four different devices (iPhone, iPad, Windows, macOS).  This means four different devices, each with its own individual installation of Wickr, which translates to a different ZAPPIDHASH value for each individual device.  Knowing a user has multiple devices could be beneficial.  Warning:  be careful, because this isn’t the only way to interpret this data.

As part of the testing, I wanted to see if this value could change on a device, and, as it turns out, it can.  Test Account 2 was only logged in on the Pixel 3.  I installed the app, used it, pulled the data, wiped the Pixel and flashed it with a new install of Android, and then reinstalled Wickr.  I repeated those steps one more time, which means Wickr was installed on the same device three different times, and, as you can see, there are three different hash values for ZUSER 2 (Test Account 2).

The moral of this story is that while this value can possibly represent different devices where a user may be logged in, it actually represents instances of app installation, so be careful in your interpretation.
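The reinstall test above boils down to counting distinct hashes per user. A sketch against an in-memory stand-in for ZSECX_APP, with sample rows mirroring the three reinstalls on the Pixel 3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Invented rows representing three installs of Wickr for ZUSER 2.
con.executescript("""
CREATE TABLE ZSECX_APP (Z_PK INTEGER PRIMARY KEY, ZUSER INTEGER,
                        ZAPPIDHASH TEXT);
INSERT INTO ZSECX_APP (ZUSER, ZAPPIDHASH) VALUES
  (2, 'hash-install-1'), (2, 'hash-install-2'), (2, 'hash-install-3');
""")

count, = con.execute(
    "SELECT COUNT(DISTINCT ZAPPIDHASH) FROM ZSECX_APP WHERE ZUSER = 2"
).fetchone()
print(count)  # 3 installations -- not necessarily 3 devices
```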


Wickr is a tough one.  This app presents all sorts of forensic challenges.  At the moment there is very little data that is recoverable, but some insights about communication and app usage can be gleaned from what little data is available.  Sometimes, files can be recovered, and that may be all an examiner/investigator needs.

The good news is, though, there is help on the horizon.

Google Assistant Butt Dials (aka Accidental & Canceled Invocations)

Last week I was at DFRWS USA in Portland, OR to soak up some DFIR research, participate in some workshops, and congregate with some of the DFIR tribe. I also happened to be there to give a 20-minute presentation on Android Auto & Google Assistant.

Seeing how this was my first presentation I was super nervous, and I am absolutely sure it showed (I got zero sleep the night before). I also made the rookie mistake of making WAY more slides than I had time for; I do not possess that super power that allows some in our discipline to zip through PowerPoint slides at superhuman speeds. The very last slide in the deck had my contact information on it, which included the URL for this blog. Unbeknownst to me, several people visited the blog shortly after my presentation and read some of the stuff here. Thank you!

As it turns out, this happened to generate a conversation. On one of the breaks someone came up to me and posed a question about Google Assistant. That question led to other conversations about Assistant, and another question was asked: what happens when a user cancels whatever action they wanted Google Assistant to do when they first invoked it?

I had brought my trusty Pixel 3 test phone with me on this trip for another project I am working on, so I was able to test this question fairly quickly with a pleasantly surprising set of results. The Pixel was running Android Pie with a patch level of February 2019 that had been freshly installed a mere two hours earlier. The phone was not rooted, but did have TWRP (3.3.0) installed, which allowed me to pull the data once I had run my tests.

The Question

Consider this scenario: a user not in the car calls on Google Assistant to send a text message to a recipient. Assistant acknowledges, and asks the user to provide the message they want to send. The user dictates the message, and then decides, for whatever reason, that they do not want to send it. Assistant reads the message back to the user and asks what the user wants to do (send it or not send it). The user indicates they want to cancel the action, and the text message is never sent.

This is the scenario I tested. In order to invoke Google Assistant I used the Assistant button on the right side of the Google Quick Search bar on the Android home screen. My dialogue with Google Assistant went as follows:

Me: OK, Google. Send a message to Josh Hickman

GA: Message to Josh Hickman using SMS. Sure. What’s the message?

Me: This is the test message for Google Assistant, period (to represent punctuation).

GA: I got “This is a test message for Google Assistant.” Do you want to send it or change it?

Me: Cancel.

GA: OK, no problem.

If you have read my blog post on Google Assistant when outside of the car you know where the Google Assistant protobuf files are located, and the information they contain, so I will skip ahead to examining the file that represented this session.

The file header that reports where the protobuf file comes from is the same as before; the “opa” is seen in the red box. However, there is a huge difference with regards to the embedded audio data in this file. See Figure 1.

Figure 1
Figure 1.  Same header, different audio.

In the blue box there is a marker for Ogg, a container format that is used to encapsulate audio and video files. In the orange box is a marker for Opus, which is a lossy audio compression codec. It is designed for interactive speech and music transmission over the Internet and is considered high-quality audio, which makes it well suited for sending Assistant audio across limited-bandwidth connections.  Based on this experiment and data in the Oreo image I released a couple of months ago, I believe Google Assistant may be using Opus now instead of the LAME codec.  The takeaway here is to just be aware you may see either.

In the green box is the string “Google Speech using libopus.” Libopus is the reference library used to encode and decode Opus audio. Since this was clearly audio data, I treated it just like the embedded MP3 data I had previously seen in other Google Assistant protobuf files. I carved from the Ogg marker all the way down until I reached a series of 0xFF values just before a BNDL (see the previous Google Assistant posts about BNDL). I saved the file out with no extension and opened it with VLC Player. The following audio came out of my speakers:

“OK, no problem.”

This is the exact behavior I had seen before in Google Assistant protobuf files: the file contained the audio of the last thing Google Assistant said to me.
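The manual carve described above (from the Ogg marker down to the run of 0xFF values) can be sketched in a few lines. This is a rough illustration of that one observation, not a general Ogg parser; the sample bytes are invented.

```python
def carve_ogg(blob, ff_run=8):
    """Carve from the OggS capture pattern to the first run of 0xFF padding.

    Mirrors the manual method described in the text: everything between the
    Ogg marker and the 0xFF bytes preceding the BNDL is treated as audio.
    """
    start = blob.find(b"OggS")
    if start == -1:
        return None
    end = blob.find(b"\xff" * ff_run, start)
    return blob[start:end if end != -1 else len(blob)]

# Invented stand-in for a protobuf file: header, audio, 0xFF padding, BNDL.
sample = (b"\x00opa\x00" + b"OggS" + b"\x01audio-bytes\x02"
          + b"\xff" * 16 + b"BNDL")
carved = carve_ogg(sample)
print(carved[:4])  # b'OggS'
```

A real carve would then be written out and played back (the text used VLC Player).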

However, in this instance my request (to send a message) had not been passed to a different service (the Android Messages app) because I had indicated to Assistant that I did not want to send the message (my “Cancel” command). I continued searching the file to see if the rest of my interaction with Google Assistant was present.

Figure 2 shows an area a short way past the embedded audio data. The area in the blue box should be familiar to those who read my previous Google Assistant posts. The hexadecimal string 0xBAF1C8F803 appears just before the first vocal input (red box) that appears in this protobuf file. The 8-byte string seen in the orange box, while not exactly what I had seen before, had bytes that were the same (the leading 0x010C and trailing 0x040200). Either way, if you see this, get ready to see the text of some of the user’s vocal input.

Figure 2
Figure 2.  What is last is first.

So far, this pattern was exactly as I had seen before: what was last during my session with Google Assistant was first in the protobuf file. So I skipped a bit of data because I knew the session data that followed dealt with the last part of the session. If the pattern holds, that portion of the session will appear again towards the end of the protobuf file.

I navigated to the portion seen in Figure 3. Here I find a 16-byte string which I consider to be a footer for what I call vocal transactions. It marks the end of the data for my “Cancel” command; you can see the string in the blue box. Also in Figure 3 is the 8-byte string that I saw earlier (that acts as a marker for the vocal input) and the text of the vocal input that started the session (“Send a message to Josh Hickman”).

Figure 3
Figure 3.  The end of a transaction and the beginning of another.

Traveling a bit further finds two things of interest. The first is data that indicates how the session was started (via pressing the button in the Google Quick Search Box – I see this throughout the files in which I invoked Assistant via the button), which is highlighted in the blue box in Figure 4. Figure 4 also has a timestamp in it (red box). The timestamp is a Unix Epoch timestamp that is stored little endian (0x0000016BFD619312). When decoded, the timestamp is 07/16/2019 at 17:42:38 PDT (-7:00), which can be seen in Figure 5. This is when I started the session.

Figure 4
Figure 4.  A timestamp and the session start mechanism.
Figure 5
Figure 5.  The decoded timestamp.
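Decoding this little-endian millisecond timestamp is a one-liner with struct. The raw bytes below are the value as it sits in the file; the result is shown in UTC, so subtract seven hours for PDT:

```python
import struct
from datetime import datetime, timedelta, timezone

# The eight bytes as stored in the file (little endian).
raw = bytes.fromhex("129361fd6b010000")
millis, = struct.unpack("<Q", raw)   # -> 0x0000016BFD619312

# Use timedelta arithmetic to avoid float rounding on the milliseconds.
ts = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=millis)
print(hex(millis), ts.isoformat())
# 0x16bfd619312 2019-07-17T00:42:38.738000+00:00  (17:42:38 PDT)
```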

The next thing I find, just below the timestamp, is a transactional GUID. I believe this GUID is used by Google Assistant to keep vocal input paired with the feedback that the input generates; this helps keep a user’s interaction with Google Assistant conversational. See the red box in Figure 6.

Figure 6
Figure 6.  Transactional GUID.

The data in the red box in Figure 7 is interesting, and I didn’t realize its significance until I was preparing slides for my presentation at DFRWS. The string 3298i2511e4458bd4fba3 is the Lookup Key associated with the (lone) contact on my test phone, “Josh Hickman”; this key appears in a few places. In the Contacts database (/data/data/com.android.providers.contacts/databases/contacts2.db) the key appears in the contacts, view_contacts, view_data, and view_entities tables. It also appears in the participants table in the Bugle database (/data/data/com.google.android.apps.messaging/databases/bugle.db), which is the database for the Android Messages app. See Figures 7, 8, & 9.

Figure 7
Figure 7.  The lookup key in the protobuf file.
Figure 8.PNG
Figure 8.  The participants table entry in the bugle.db.
Figure 9
Figure 9.  A second look at the lookup key in the bugle.db.
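Pivoting on the lookup key across these databases is a simple equality query. The sketch below uses an in-memory stand-in for the bugle.db participants table; only the lookup key itself comes from the text, and the column names are assumptions to verify against a real extraction.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Miniature participants table; real bugle.db has many more columns.
con.executescript("""
CREATE TABLE participants (_id INTEGER PRIMARY KEY, lookup_key TEXT,
                           full_name TEXT);
INSERT INTO participants VALUES (1, '3298i2511e4458bd4fba3', 'Josh Hickman');
""")

# Pivot: which message participants share the contact's lookup key?
hits = con.execute(
    "SELECT _id, full_name FROM participants WHERE lookup_key = ?",
    ("3298i2511e4458bd4fba3",)).fetchall()
print(hits)
```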

There are a few things seen in Figure 10. First is the transactional GUID that was previously seen in Figure 6 (blue box). Just below that is the vocal transaction footer (green box), the 8-byte string that marks vocal input (orange box), and the message I dictated to Google Assistant (red box). See Figure 10.

Figure 10.png
Figure 10.  There is a lot going on here.

Figure 11 shows the timestamp in the red box. The string, read little endian, decodes to 07/16/2019 at 17:42:43 PDT, 5 seconds past the first timestamp, which makes sense, as I would have dictated the message after making the request to Google Assistant. The decoded time is seen in Figure 12.

Figure 11
Figure 11.  Timestamp for the dictated message.
Figure 12.png
Figure 12.  The decoded timestamp.

Below that is the transactional GUID (again, previously seen in Figure 6) associated with the original vocal input in the session. Again, I believe this allows Google Assistant to know that this dictated message is associated with the original request (“Send a message to Josh Hickman”). This allows Assistant to be conversational with the user. See the red box in Figure 13.

Figure 13.png
Figure 13.  The same transactional GUID.

Scrolling through quite a bit of protobuf data finds the area seen in Figure 14. Here I found the vocal transaction footer (blue box), the 8-byte vocal input marker (orange box) and the vocal input “Cancel” in the red box.

Figure 14.png
Figure 14.  The last vocal input of the session.

Figure 15 shows the timestamp of the “Cancel;” it decodes to 07/16/2019 at 17:42:57 PDT (-7:00). See Figure 16 for the decoded timestamp.

Figure 15.png
Figure 15.  The “Cancel” timestamp.
Figure 16
Figure 16.  The decoded “Cancel” timestamp.

The final part of this file shows the original transactional GUID again (red box), which associates the “Cancel” with the original request. See Figure 17.

Figure 17
Figure 17.  The original transactional GUID…again.

After I looked at this file, I checked my messages on my phone and the message did not appear in the Android Messages app. Just to confirm, I pulled my bugle.db and the message was nowhere to be found. So, based on this, it is safe to say that if I change my mind after having dictated a message to Google Assistant the message will not show up in the database that holds messages. This isn’t surprising as Google Assistant never handed me off to Android Messages in order to transmit the message.

However, and this is the surprising part, the message DOES exist on the device in the protobuf file holding the Google Assistant session data. Granted, I had to go in and manually find the message and the associated timestamp, but it is there. The upside to the manual parsing is there is already some documentation on this file structure to help navigate to the relevant data. 🙂

I also completed this scenario by invoking Google Assistant verbally, and the results were the same. The message was still resident inside of the protobuf file even though it had not been saved to bugle.db.

Hitting the Cancel Button

Next, I tried the same scenario, but instead of telling Google Assistant to cancel, I just hit the “Cancel” button in the Google Assistant interface. Some users may be in a hurry to cancel a message and may not want to wait for Assistant to give them an option to cancel, or they may be interrupted and need to cancel the message before sending it.

I ran this test in the Salt Lake City, UT airport, so the time zone was Mountain Daylight Time (MDT or -6:00). The conversation with Google Assistant went as so:

Me: Send a text message to Josh Hickman.

GA: Message to Josh Hickman using SMS. Sure. What’s the message?

Me: This is a test message that I will use to cancel prior to issuing the cancel command.

*I pressed the cancel button in the Google Assistant UI*

Since I’ve already covered the file structure and markers, I will skip those things and get to the relevant data. I will just say the structure and markers are all present.

Figure 18 shows the 8-byte marker indicating the text of the vocal input is coming (orange box) along with the text of the input itself (red box). The timestamp seen in Figure 19 is the correct timestamp based on my notes: 07/18/2019 at 9:25:37 MDT (-6:00).

Figure 18.png
Figure 18.  The request.
Figure 19.png
Figure 19.  The timestamp.
Figure 20
Figure 20.  The timestamp decoded.

Just as before, the dictated text message request was further up in the file, which makes sense here because the last input I gave Assistant was the dictated text message. Also note that there are variants of the dictated message, each with its own designation (T, X, V, W, & Z). This is probably due to the fact that I was in a noisy airport terminal, and, at the time I dictated the message, there was an announcement going over the public address system. See Figure 21 for the message and its variants, Figure 22 for the timestamp, and Figure 23 for the decoded timestamp.

Figure 21
Figure 21.  The dictated message with variants.
Figure 22.png
Figure 22.  The timestamp.
Figure 23.png
Figure 23.  The decoded timestamp.

As I mentioned, I hit the “Cancel” button on the screen as soon as the message was dictated. I watched the message appear in the Google Assistant UI, but I did not give Assistant time to read the message back to me to make sure it had dictated the message correctly. I allowed no feedback whatsoever. Considering this, the nugget I found in Figure 24 was quite the surprise.

Figure 24
Figure 24.  The canceled message.

In the blue box you can see the message in a java wrapper, but the thing in the red box…well, see for yourself. I canceled the message by pressing the “Cancel” button, and there is a string “Canceled” just below the message. I tried this scenario again by just hitting the “Home” button (instead of the “Cancel” button in the Assistant UI), and I got the same result. The dictated message was present in the protobuf file, but this time the message did not appear in a java wrapper; the “Canceled” ASCII string was just below an empty wrapper. See Figure 25.

Figure 25
Figure 25.  Canceled.  Again.

So it would appear that an examiner may get some indication a session was canceled prior to Google Assistant getting a chance to either complete the action of sending a message or Google Assistant getting a “Cancel” command. Obviously, there are multiple scenarios in which a user could cancel a session with Google Assistant, but having “Canceled” in the protobuf data is definitely a good indicator. The drawback, though, is there is no indication how the transaction was canceled (e.g. by way of the “Cancel” button or hitting the home button).

An Actual Virtual Assistant Butt Dial

The next scenario I tested involved me simulating what I believe to be Google Assistant’s version of a butt-dial. What would happen if Google Assistant was accidentally invoked? By accidentally I mean by hitting the button in the Quick Search Box by accident, or by saying the hot word without intending to call on Google Assistant. Would Assistant record what the user said? Would it try to take any action even though there was probably no actionable items, or would it freeze and not do anything? Would there be any record of what the user said, or would Assistant realize what was going on, shut itself off, and not generate any protobuf data?

There were two tests here with the difference being in the way I invoked Assistant. One was by button and the other by hot word. Since the results were the same I will show just one set of screen shots, which are from the scenario in which I pressed the Google Assistant button in the Quick Search Bar (right side). I was in my hotel room at DFRWS, so the time zone is Pacific Daylight Time (-7:00) again. The scenario went as such:

*I pressed the button*

Me: Hi, my name is Josh Hickman and I’m here this week at the Digital Forensic Research Workshop. I was here originally…

*Google Assistant interrupts*

GA: You’d like me to call you ‘Josh Hickman and I’m here this week at the digital forensic research Workshop.’ Is that right?

*I ignore the feedback from Google Assistant and continue.*

Me: Anyway, I was here to give a presentation and the presentation went fairly well considering the fact that it was my first time here…

*Google Assistant interrupts again*

GA: I found these results.

*Google Assistant presents some search results for addressing anxiety over public speaking…relevant, hilarious, and slightly creepy.*

As before, I will skip file structure and get straight to the point.

The vocal input is in this file. Figure 26 shows the vocal input and a variant of what I said (“I’m” versus “I am”) in the purple boxes. It also shows the 5-byte marker for the first vocal input in a protobuf file (blue box) along with the 8-byte marker that indicates vocal input is forthcoming (orange box).

Figure 26.png
Figure 26.  The usual suspects.

Just below the area in Figure 26 is the timestamp of the session. The time decodes to 07/17/2019 at 11:51:06 PDT (-7:00). See Figure 27.

Figure 27.png
Figure 27.  Timestamp.
Figure 28
Figure 28.  Decoded timestamp.

Figure 29 shows my vocal input wrapped in the java wrapper.

Figure 29.png
Figure 29.  My initial vocal input, wrapped.

Interestingly enough, I did not find any data in this file related to the second bit of input Google Assistant received, the fact that Google Assistant performed a search, or what search terms it used (or thought I gave it). I even went out to other protobuf files in the app_session folder to see if a new file was generated. Nothing.


This exercise shows there is yet one more place to check for messages in Android.  Traditionally, we have always thought to look for messages in database files.  What if the user composed a message using Google Assistant?  If the user actually sends the message, the traditional way of thinking still applies.  But what if the user changes their mind prior to actually sending those dictated messages?  Are those messages saved to a draft folder or some other temporary location in Messages?  No, they are not.  In fact, they are not stored in any other location that I can find other than the Google Assistant protobuf files (if someone can find them, please let me know).  The good news is that if a message is dictated using Assistant and the user cancels it, it is possible to recover the message that was dictated but never sent.  This could give further insight into the intent of a user and help recover even more messages.  It also gives a better picture of how a user actually interacted with their device.

The Google Assistant protobuf files continue to surprise me with regard to how much data they contain.  At this year’s I/O conference Google announced speed improvements to Assistant along with their intention to push more of the natural language processing and machine learning functions onto the devices instead of having everything done server-side.  This could be advantageous in that more artifacts could be left behind by Assistant, which would give a more holistic view of device usage.

Two Snaps and a Twist – An In-Depth (and Updated) Look at Snapchat on Android


There is an update to this post. It can be found after the ‘Conclusion’ section.

I was recently tasked with examining a two-year old Android-based phone which required an in-depth look at Snapchat. One of the things that I found most striking (and frustrating) during this examination was the lack of a modern, in-depth analysis of the Android version of the application beyond the tcspahn.db file, which, by the way, doesn’t exist anymore, and the /cache folder, which isn’t really used anymore (as far as I can tell). I found a bunch of things that discussed decoding encrypted media files, but this information was years old (Snapchat 5.x). I own the second edition of Learning Android Forensics by Skulkin, Tyndall, and Tamma, and while this book is great, I couldn’t find where they listed the version of Snapchat they examined or the version of Android they were using; what I found during my research for this post did not really match what was written in their book. A lot of things have changed.

Googling didn’t seem to help either; I just kept unearthing the older research. The closest I got was a great blog post by John Walther that examined Snapchat on Android Marshmallow. Some of John’s post lined up with what I was seeing, while other parts did not.


Snapchat averages 190 million users daily, more than half of the U.S. population, and those 190 million people send three billion snaps (pictures/videos) daily. Personally, I have the app installed on my phone, but it rarely sees any usage. Most of the time I use it on my kid, who likes the filters that alter his voice or require that he stick out his tongue. He is particularly fond of the recent hot dog filter.

One of the appealing things about Snapchat is that direct messages (DMs) and snaps disappear after they’re opened. While the app can certainly be used to send silly, ephemeral pictures or videos, some people find a way to twist the app for their own nefarious purposes.

There has been plenty written in the past about how some traces of activity are actually recoverable, but, again, nothing recent. I was surprised to find that there was actually more activity-related data left behind than I thought.

Before we get started just a few things to note (as usual). First, my test data was generated using a Pixel 3 running Android 9.0 (Pie) with a patch level of February 2019. Second, the version of Snapchat I tested is, which was the most current version as of 05/22/2019. Third, while the phone was not rooted, it did have TWRP, version 3.3.0-0, installed. Extracting the data was straightforward as I had the Android SDK Platform tools installed on my laptop. I booted into TWRP and then ran the following from the command line:

adb pull /data/data/com.snapchat.android

That’s it. The pull command dropped the entire folder in the same path as where the platform tools resided.

As part of this testing, I extracted the com.snapchat.android folder five different times over a period of 8 days as I wanted to see what stuck around versus what did not. I believe it is also important to understand the volatility of the data that is provided in this app. I think understanding the volatility will help investigators in the field and examiners understand exactly how much time, if any, they have before the data they are seeking is no longer available.

I will add that I tested two tools to see what they could extract: Axiom (version 3.0) and Cellebrite (UFED 4PC 7.18 and Physical Analyzer 7.19). Both tools failed to extract (parsing not included) any Snapchat data. I am not sure if this is a symptom of these tools (I hope not) or my phone. Regardless, both tools extracted nothing.


So, what’s changed? Quite a bit as far as I can tell. The storage location of where some of the data that we typically seek has changed. There are enough changes that I will not cover every single file/folder in Snapchat. I will just focus on those things that I think may be important for examiners and/or investigators.

One thing has not changed: the timestamp format. Unless otherwise noted, all timestamps discussed are in Unix Epoch.
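Since every table discussed below stores its times this way, a small helper is handy. This is a sketch of my own (not something from the app); the epoch values observed in these databases are millisecond-precision, so the helper falls back to seconds only when the value is too small to be milliseconds:

```python
from datetime import datetime, timezone

def epoch_to_utc(ts):
    """Convert a Snapchat epoch timestamp to a UTC datetime.
    Values in these databases are typically milliseconds; fall back
    to seconds if the value is too small to be millisecond-precision."""
    if ts > 1e11:  # millisecond values are ~13 digits
        ts = ts / 1000.0
    return datetime.fromtimestamp(ts, tz=timezone.utc)
```

Remember to convert to the relevant local time zone when building a timeline.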

The first thing I noticed is that the root level has some new additions (along with some familiar faces). The folders that appear to be new are “app_textures”, “lib”, and “no_backup.” See Figure 1.

Figure 1. Root level of the com.snapchat.android folder.

The first folder that may be of interest is one that has been of interest to forensicators and investigators since the beginning: “databases.” The first database of interest is “main.db.” This database replaces tcspahn.db as it now contains a majority of user data (again, tcspahn.db does not exist anymore). There is quite a bit in here, but I will highlight a few tables. The first table is “Feed.” See Figure 2.

Figure 2. The Feed.

This table contains the last action taken in the app. Specifically, the parties involved in that action (seen in Figure 2), what the action was, and when the action was taken (Figure 3). In Figure 4 you can even see which party did what. The column “lastReadTimestamp” is the absolute last action, and the column “lastReader” shows who did that action. In this instance, I had sent a chat message from Fake Account 1 (“thisisdfir”) to Fake Account 2 (“hickdawg957”) and had taken a screenshot of the conversation using Fake Account 1. Fake Account 2 then opened the message.

Figure 3. Last action.

Figure 4. Who did what?
The second table is “Friend.” This table contains anyone who may be my friend. The table contains the other party’s username, user ID, display name, the date/time I added that person as a friend (column “addedTimestamp”), and the date/time the other person added me as a friend (column “reverseAddedTimestamp”). Also seen are any emojis that may be assigned to my friends. See Figures 5, 6, and 7.

Figure 5. Username, User ID, & Display Name.
Figure 6. Friendmojis (emojis added to my friends).

Figure 7. Timestamps for when I added friends and when they added me.

Note that the timestamps are for when I originally added the friend/the friend added me. The timestamps here translate back to dates in November of 2018, which is when I originally created the accounts during the creation of my Android Nougat image.

One additional note here. Since everyone is friends with the “Team Snapchat” account, the value for that entry in the “addedTimestamp” column is a good indicator of when the account you’re examining was created.

The next table is a biggie: Messages. I will say that I had some difficulty actually capturing data in this table. The first two attempts involved sending a few messages back and forth, letting the phone sit for 10 or so minutes, and then extracting the data. In each of those instances, absolutely NO data was left behind in this table.

In order to actually capture the data, I had to leave the phone plugged in to the laptop, send some messages, screenshot the conversation quickly, and then boot into TWRP, all of which happened in under two minutes. If Snapchat is deleting the messages from this table that quickly, they will be extremely hard to capture in the future.

Figure 8 is a screenshot of my conversation (all occurred on 05/30/2019) taken with Fake Account 1 (on the test phone) and Figure 9 shows the table entries. The messages on 05/30/2019 start on Row 6.

Figure 8. A screenshot of the conversation.

Figure 9. Table entries of the conversation.

The columns “timestamp” and “seenTimestamp” are self-explanatory. The column “senderId” is the “id” column from the Friends table. Fake Account 1 (thisisdfir) is senderId 2 and Fake Account 2 (hickdawg957) is senderId 1. The column “feedRowId” tells you who the conversation participants are (beyond the sender). The values link back to the “id” column in the Feed table previously discussed. In this instance, the participants in the conversation are hickdawg957 and thisisdifr.

In case you missed it, Figure 8 actually has two saved messages between these two accounts from December of 2018. Information about those saved messages appears in Rows 1 and 2 in the table. Again, these are relics from previous activity and were not generated during this testing. This is an interesting find as I had completely wiped and reinstalled Android multiple times on this device since those messages were sent, which leads me to speculate these messages may be saved server-side.

In Figure 10, the “type” column is seen. This column shows the type of message that was transmitted. There are three “snap” entries here, but, based on the timestamps, these are not snaps that I sent or received during this testing.

Figure 10. The “types” of messages.
After the “type” column there are a lot of NULL values in a bunch of columns, but you eventually get to the message content, which is seen in Figure 11. Message content is stored as blob data. You’ll also notice there is a column “savedStates.” I am not sure exactly what the entries in the cells are referring to, but they line up with the saved messages.

Figure 11. Message (blob) content.

In Figure 12, I bring up one of the messages that I recently sent.

Figure 12. A sample message.

The next table is “Snaps.” This table is volatile, to say the least. The first data extraction I performed was on 05/22/2019 around 19:00. However, I took multiple pictures and sent multiple snaps on 05/21/2019 around lunch time and the following morning on 05/22/2019. Overall, I sent eight snaps (pictures only) during this time. Figure 13 shows what I captured during my first data extraction.

Figure 13. I appear to be missing some snaps.
Of the eight snaps that I sent, only six appear in the table. The first two entries in the table pre-date when I started the testing (on 05/21/2019), so those entries are out (they came from Team Snapchat). The first timestamp is from the first snap I sent on 05/22/2019 at 08:24. The two snaps from 05/21/2019 are not here. So, within 24 hours, the data about those snaps had been purged.

On 05/25/2019 I conducted another data extraction after having received a snap and sending two snaps. Figure 14 shows the results.

Figure 14. A day’s worth of snaps.
The entries seen in Figure 13 (save the first two) are gone, but there are two entries there for the snaps I sent. However, there is no entry for the snap I received. I checked all of the tables and there was nothing. I received the snap at 15:18 that day, and performed the extraction at 15:51. Now, I don’t know for sure that a received snap would have been logged. I am sure, however, that it was not there. There may be more testing needed here.

Figure 15 shows the next table, “SendToLastSnapRecipients.” This table shows the user ID of the person I last sent a snap to in the “key” column, and the time at which I sent said snap.

Figure 15. The last snap recipient.


During the entire testing period I took a total of 13 pictures. Of those 13, I saved 10 of them to “Memories.” Memories is Snapchat’s internal gallery, separate from the phone’s Photos app. After taking a picture and creating an overlay (if desired), you can choose to save the picture, which places it in Memories. If you were to decide to save the picture to your Photos app, Snapchat will allow you to export a copy of the picture (or video).

And here is a plus for examiners/investigators: items placed in Memories are stored server-side. I tested this by signing into Fake Account 1 from an iOS device, and guess what…all of the items I placed in Memories on the Pixel 3 appeared on the iOS device.

Memories can be accessed by swiping up from the bottom of the screen. Figure 16 shows the Snapchat screen after having taken a photo but before snapping (sending) it. Pressing the area in the blue box (bottom left) saves the photo (or video) to Memories. The area in the red box (upper right) contains the overlay tools.

Figure 16. The Snapchat screen.

Figure 17 shows the pictures I have in my Memories. Notice that there are only 9 pictures (not 10). More on that in a moment.

Figure 17. My memories. It looks like I am short one picture.

The database memories.db stores relevant information about files that have been saved to Memories. The first table of interest is “memories_entry.” This table contains an “id,” the “snap_id,” and the date the snap was created. There are two columns regarding the time: “created_time” and “latest_created_time.” In Figure 18 there is a few seconds’ difference between the values in some cells in the two columns, but there are also a few that are the same value. In the cells where there are differences, the differences are negligible.

There is also a column titled “is_private” (seen in Figure 19). This column refers to the My Eyes Only (MEO) feature, which I will discuss shortly. For now, just know that the value of 1 indicates “yes.”

Figure 18. Memories entries.

Figure 19. My Eyes Only status.


I have been seeing a lot of listserv inquiries as of late regarding MEO. Cellebrite recently added support for MEO file recovery in Android as of Physical Analyzer 7.19 (iOS to follow), and, after digging around in the memories database, I can see why this would be an issue.

MEO allows a user to protect pictures or videos with a passcode; this passcode is separate from the user’s password for their Snapchat account. A user can opt to use a 4-digit passcode, or a custom alphanumeric passcode. Once a user indicates they want to place a media file in MEO, that file is moved out of the Memories area into MEO (it isn’t copied to MEO).

MEO is basically a private part of Memories. So, just like everything else in Memories, MEO items are also stored server-side. I confirmed this when I signed in to Fake Account 1 from the iOS device; the picture I saved to MEO on the Pixel 3 appeared in MEO on the iOS device. The passcode was the same, too. Snapchat says if a user forgets the passcode to MEO, they cannot help recover it. I’m not sure how true that is, but who knows.

If you recall, I placed 10 pictures in Memories, but Figure 17 only showed 9 pictures. That is because I moved one picture to MEO. Figure 20 shows my MEO gallery.

Figure 20. MEO gallery.

In the memories database, the table “memories_meo_confidential” contains entries about files that have been placed in MEO. See Figure 21.

Figure 21. MEO table in the memories database.

This table contains a “user_id,” the hashed passcode, a “master_key,” and the initialization vector (“iv”). The “master_key” and “initialization vector” are both stored in base64. And, the passcode….well, it has been hashed using bcrypt (ugh). I will add that Cellebrite reports Physical Analyzer 7.19 does have support for accessing MEO files, and, while I did have access to 7.19, I was not able to tell if it was able to access my MEO file since it failed to extract any Snapchat data.
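Getting the raw key material out of those base64 columns is straightforward with the standard library. This sketch only decodes the stored values; it does nothing about the bcrypt-hashed passcode, which still has to be attacked separately:

```python
import base64

def decode_meo_fields(master_key_b64, iv_b64):
    """Decode the base64-encoded master_key and iv values from the
    memories_meo_confidential table into raw bytes. This only recovers
    the stored bytes for further work - the bcrypt-hashed passcode is a
    separate problem entirely."""
    key = base64.b64decode(master_key_b64)
    iv = base64.b64decode(iv_b64)
    return key, iv
```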

The “user_id” is interesting: “dummy.” I have no idea what that is referring to, and I could not find it anywhere else in the data I extracted.

The next table is “memories_media.” This table does have a few tidbits of interesting data: another “id,” the size of the file (“size”), and what type of file it is (“format”). Since all of my Memories are pictures, all of the cells show “image_jpeg.” See Figures 22 and 23.

Figure 22. “memories_media.”

Figure 23. “memories_media,” part 2.

The next table is “memories_snap.” This table has a lot of information about my pictures, and brings together data from the other tables in this database. Figure 24 shows a column “media_id,” which corresponds to the “id” in the “memories_media” table discussed earlier. There are also “creation_time” and “time_zone_id” columns.

Figure 24. id, media_id, creation_time, and time zone.

Figure 25 shows the width and height of the pictures. Also note the column “duration.” The value is 3.0 for each picture. I would be willing to bet that number could be higher or lower if the media were videos.

Figure 25 also shows the “memories_entry_id,” which corresponds to the “id” column in the “memories_entry” table. There is also a column for “has_location.” Each of the pictures I placed in Memories has location data associated with it (more on that in a moment).

Figure 25. Picture size, another id, and a location indicator.

Figure 26 is interesting as I have not been able to find the values in the “external_id” or “copy_from_snap_id” columns anywhere.

Figure 26. No clue here.

The data seen in Figure 27 could be very helpful in situations where an examiner/investigator thinks there may be multiple devices in play. The column “snap_create_user_agent” contains information on what version of Snapchat created the snap, along with the Android version and, in my case, my phone model.

Figure 27. Very helpful.

The column “snap_capture_time” is the time I originally took the picture and not the time I sent the snap.

Figure 28 shows information about the thumbnail associated with each entry.

Figure 28. Thumbnail information.

Figure 29 is just like Figure 27 in its level of value. It contains the latitude and longitude of the device when the picture was taken. I plotted each of these entries and I will say that the coordinates are accurate +/- 10 feet. I know the GPS capabilities of every device are different, so just be aware that your mileage may vary.

Figure 29. GPS coordinates!!

Figure 29 also has the column “overlay_size.” This is a good indication of whether a user has placed an overlay in the picture/video. Overlays are things that are placed in a photo/video after it has been captured. Figure 30 shows an example of an overlay (in the red box). The overlay here is caption text.

Figure 30. An overlay example.

If the value in the overlay_size column is NULL that is a good indication that no overlay was created.

Figure 31 shows the “media_key” and “media_iv,” both of which are in base64. Figure 32 shows the “encrypted_media_key” and “encrypted_media_iv” values. As you can see there is only one entry that has values for these columns; that entry is the picture I placed in MEO.

Figure 31. More base64.

Figure 32. Encrypted stuff.

The next table that may be of interest is “memories_remote_operation.” This shows all of the activity taken within Memories. In the “operation” column, you can see where I added the 10 pictures to Memories (ADD_SNAP_ENTRY_OPERATION). The 11th entry, “UPDATE_PRIVATE_ENTRY_OPERATION,” is where I moved a picture into MEO. See Figure 33.

Figure 33. Remote operations.

The column “serialized_operation” stores information about the operation that was performed. The data appears to be stored in JSON format. The cell contains a lot of the same data that was seen in the “memories_snap” table. I won’t expand it here, but DB Browser for SQLite does a good job of presenting it.

Figure 34 shows a better view of the column plus the “created_timestamp” column. This is the time when the operation in the entry was performed.

Figure 34. JSON and a timestamp for the operation.

Figure 35 contains the “target_entry” column. The values in this column refer to the “id” column in the “memories_entry” table.

Figure 35. Operation targets.

To understand the next database, journal, I first have to explain some additional file structure of the com.snapchat.android folder. If you recall all the way back to Figure 1, there was a folder labeled “files.” Entering that folder reveals the items seen in Figure 36. Figure 37 shows the contents of the “file_manager” folder.

Figure 36. “Files” structure.

Figure 37. file_manager.

The first folder of interest here is “media_package_thumb,” the contents of which can be seen in Figure 38.

Figure 38. Thumbnails?

Examining the first file here in hex finds a familiar header: 0xFF D8 FF E0…yoya. These things are actually JPEGs. So, I opened a command line in the folder, typed ren *.* *.jpg and BAM: pictures! See Figure 39.

Figure 39. Pictures!

Notice there are a few duplications here. However, there are some pictures here that were not saved to memories and were not saved anywhere else. As an example, see the picture in Figure 40.

Figure 40. A non-saved, non-screenshot picture.
Figure 40 is a picture of the front of my employer’s building. For documentation purposes, I put a text overlay in the picture with the date/time I took it (to accompany my notes). I then snapped this picture to Fake Account 2, but did not save it to Memories, did not save it to my Photos app, and did not screenshot it. However, here it is, complete with the overlay. Now, while this isn’t the original picture (it is a thumbnail) it can still be very useful; one would need to examine the “Snaps” table in the main database to see if there was any activity around the MAC times for the thumbnail.

The next folder of interest is the “memories_media” folder. See Figure 41.

Figure 41. Hmm…

There are 10 items here. These are also JPEGs. I performed the same operation here as I did in the “media_package_thumb” folder and got the results seen in Figure 42.

Figure 42. My Memories, sans overlays.

These are the photographs I placed in Memories, but the caption overlays are missing. The picture that is in MEO is also here (the file starting with F5FC6BB…). Additionally, these are high resolution pictures.

You may be asking yourself “What happened to the caption overlays?” I’m glad you asked. They are stored in the “memories_overlay” folder. See Figure 43.

Figure 43. My caption overlays.

Just like the previous two folders, these are actually JPEGs. I performed the rename function, and got the results seen in Figure 44. Figure 45 shows the overlay previously seen in Figure 30.

Figure 44. Overlays.

Figure 45. The Megaman overlay from Figure 30.

The folder “memories_thumbnail” is the same as the others, except it contains just the files in Memories (with the overlays). For brevity’s sake, I will just say the methodology to get the pictures to render is the same as before. Just be aware that while I just have pictures in my Memories, a user could put videos in there, too, so you could have a mixture of media. If you do a mass-renaming, and a file does not render, the file extension is probably wrong, so adjust the file extension(s) accordingly.

Now that we have discussed those file folders, let’s get back to the journal database. This database keeps track of everything in the “file_manager” directory, including those things we just discussed. Figure 46 shows the top level of the database’s entries.

Figure 46. First entries in the journal database.

If I filter the “key” column using the term “package” from the “media_package_thumb” folder (the “media_package_thumb.0” files) I get the results seen in Figure 47.

Figure 47. Filtered results.

The values in the “key” column are the file names for the 21 files seen in Figure 38. The values seen in the “last_update_time” column are the timestamps for when I took the pictures. This is a method by which examiners/investigators could potentially recover snaps that have been deleted.


As it turns out, there are a few more, non-database artifacts left behind which are located in the “shared_prefs” folder seen in Figure 1. The contents can be seen in Figure 48.

Figure 48. shared_prefs contents.

The first file is identity_persistent_store.xml seen in Figure 49. The file contains the timestamp for when Snapchat was installed on the device (INSTALL_ON_DEVICE_TIMESTAMP), when the first logon occurred on the device (FIRST_LOGGED_IN_ON_DEVICE_TIMESTAMP), and the last user to logon to the device (LAST_LOGGED_IN_USERNAME).

Figure 49. identity_persistent_store.xml.

Figure 50 shows the file LoginSignupStore.xml. It contains the username that is logged in.

Figure 50. Who is logged in?

The file user_session_shared_pref.xml has quite a bit of account data in it, and is seen in Figure 51. For starters, it contains the display name (key_display_name), the username (key_username), and the phone number associated with the account (key_phone).

The value “key_created_timestamp” is notable. This timestamp converts to November 29, 2018 at 15:13:34 (EST). Based on my notes from my Nougat image, this was around the time I established Fake Account 1, which was used in the creation of the Nougat image. This might be a good indicator of when the account was established, although you could always get that data by serving Snapchat with legal process.

Rounding it out is the “key_user_id” (seen in the Friends table of the main database) and the email associated with the account (key_email).

Figure 51. user_session_shared_pref.xml


Snapchat’s reputation precedes it. I have been in a few situations where examiners/investigators automatically threw up their hands and gave up after having been told that potential evidence was generated/contained in Snapchat. They wouldn’t even try. I will say that while I always have (and will) try to examine anything regardless of what the general consensus is, I did share a bit of others’ skepticism about the ability to recover much data from Snapchat. However, this exercise has shown me that there is plenty of useful data left behind by Snapchat that can give a good look into its usage.


Alexis Brignoni over at Initialization Vectors noticed that I failed to address something in this post. First, thanks to him for reading and contacting me. 🙂 Second, he noticed that I did not address Cellebrite Physical Analyzer’s (v 7.19) and Axiom’s (v 3.0) ability to parse my test Snapchat data (I addressed the extraction portion only).

We both ran the test data against both tools and found both failed to parse any of the databases. Testing found that while Cellebrite found the pictures I describe in this post, it did not apply the correct MAC times to them (from the journal.db). Axiom failed to parse the databases and failed to identify any of the pictures.

This is not in any way, shape, or form a knock on or an attempt to single out these two tools; these are just the tools to which I happen to have access. These tools work, and I use them regularly. The vendors do a great job keeping up with the latest developments in both the apps and the operating systems. Sometimes, though, app developers will make a hard turn all of a sudden, and it does take time for the vendors to update their tools. Doing so requires R&D and quality control via testing, which can take a while depending on the complexity of the update.

However, this exercise does bring to light an important lesson in our discipline, one that bears repeating: test and know the limitations of your tools. Knowing the limitations allows you to know when you may be missing data/getting errant readings. Being able to compensate for any shortcomings and manually examine the data is a necessary skillset in our discipline.

Thank you Alexis for the catch and assist!