Demo notebook for analyzing application data

Introduction

Application data refers to information about which apps are open at a certain time. These data can reveal important information about people’s circadian rhythm, social patterns, and activity. Application data is event data; this means it cannot be sampled at a regular frequency. Instead, we only have information about the events that occurred. There are two main issues with application data: (1) missing data detection, and (2) privacy concerns.

Regarding missing data detection, we may never know if all events were detected and reported. Unfortunately there is little we can do. Nevertheless, we can take into account some factors that may interfere with the correct detection of all events (e.g. when the phone’s battery is depleted). Therefore, to correctly process application data, we need to consider other information like the battery status of the phone. Regarding the privacy concerns, application names can reveal too much about a subject; for example, use of an uncommon app may help identify a subject. Consequently, we try to anonymize the data by grouping the apps.

To address both of these issues, niimpy includes the function extract_features_app to clean, downsample, and extract features from application data while taking into account factors like the battery level and grouping apps under more generic names. In addition, niimpy provides a map of some common apps for pseudo-anonymization. This function employs other functions to extract the following features:

  • app_count: number of times an app group has been used

  • app_duration: how long an app group has been used

The app module has one internal function that helps classify the apps into groups.
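As a quick preview (a sketch only; the variables data, bat_data, and screen_data are loaded in the sections below, and the configuration options are explained in section 4), the feature functions and the wrapper are combined roughly like this:

[ ]:
preview_features = {
    app.app_count:    {"resample_args": {"rule": "30T"}},   # how often each app group is used
    app.app_duration: {"resample_args": {"rule": "30T"}},   # for how long each app group is used
}
preview = app.extract_features_app(data, bat_data, screen_data, features=preview_features)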

In the following, we will analyze application data provided by niimpy as an example to illustrate its use.

2. Read data

Let’s start by reading the example data provided in niimpy. These data have already been shaped in a format that meets the requirements of the data schema. First, we import the needed modules: the niimpy package and the application module, which we give a short name for convenience.

[1]:
import niimpy
from niimpy import config
import niimpy.preprocessing.application as app
import pandas as pd
import warnings
warnings.filterwarnings("ignore")

Now let’s read the example data provided in niimpy. The example data is in csv format, so we need to use the read_csv function. When reading the data, we can specify the timezone where the data was collected with the argument tz; this makes handling daylight saving time easier. The output is a dataframe. We can also check the number of rows and columns in the dataframe.

[2]:
data = niimpy.read_csv(config.SINGLEUSER_AWARE_APP_PATH, tz='Europe/Helsinki')
data.shape
[2]:
(132, 6)

The data was successfully read. We can see that there are 132 datapoints with 6 columns in the dataset. However, we do not know yet what the data really looks like, so let’s have a quick look:

[3]:
data.head()
[3]:
user device time application_name package_name datetime
2019-08-05 14:02:51.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs 1.565003e+09 Android System android 2019-08-05 14:02:51.009999872+03:00
2019-08-05 14:02:58.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs 1.565003e+09 Android System android 2019-08-05 14:02:58.009999872+03:00
2019-08-05 14:03:17.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs 1.565003e+09 Google Play Music com.google.android.music 2019-08-05 14:03:17.009999872+03:00
2019-08-05 14:02:55.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs 1.565003e+09 Google Play Music com.google.android.music 2019-08-05 14:02:55.009999872+03:00
2019-08-05 14:03:31.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs 1.565003e+09 Gmail com.google.android.gm 2019-08-05 14:03:31.009999872+03:00

By exploring the head of the dataframe we can form an idea of its entirety. From the data, we can see that:

  • rows are observations, indexed by timestamps, i.e. each row represents one app prompt (an app being brought to the smartphone screen)

  • columns are characteristics for each observation, for example, the user whose data we are analyzing

  • there is one main column: application_name, which stores the Android name for the application.

A few words on missing data

Missing application data is difficult to detect. Firstly, this sensor is triggered by events (i.e. it is not sampled at a fixed frequency). Secondly, different phones, operating systems, and settings affect how reliably apps are detected. Thirdly, events not related to the application sensor may affect its behavior, e.g. the battery running out. Unfortunately, we can only correct missing data for events such as the screen turning off by using data from the screen sensor and the battery level. niimpy can take these into account if we provide the screen and battery data. We will see some examples below.

A few words on grouping the apps

As previously mentioned, the application name may reveal too much about a subject and privacy problems may arise. A possible solution to this problem is to classify the apps into more generic groups. For example, apps like WhatsApp, Signal, Telegram, etc. are commonly used for texting, so we can group them under the label texting. niimpy provides a default map, but this should be adapted to the characteristics of the sample, since app availability varies across countries and populations.
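For illustration, here is a sketch of what a custom grouping map could look like; the app names and labels below are hypothetical, and the default map shipped with niimpy is available as app.MAP_APP. As the custom-feature example in section 5 suggests, the map is read from the config key group_map (the config dictionary itself is explained in section 4):

[ ]:
# Hypothetical custom map: application name -> generic group label
custom_map = {
    "WhatsApp": "comm",
    "Telegram": "comm",
    "Clash of Clans": "leisure",
    "Android System": "system",
}
# Sketch: pass the map through the config dictionary under the "group_map" key
config_with_map = {"app_column_name": "application_name",
                   "group_map": custom_map,
                   "resample_args": {"rule": "30T"}}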

A few words on the role of the battery and screen

As mentioned before, sometimes the screen may be OFF and these events will not be caught by the application data sensor. For example, we can open an app and leave it open until the phone screen turns off automatically. Another example is when the battery is depleted and the phone shuts down automatically. Having this information is crucial for correctly computing how long a subject used each app group. niimpy’s application module can take both the screen and the battery data into account. For this example, we have both, so let’s load the screen and battery data.

[4]:
bat_data = niimpy.read_csv(config.MULTIUSER_AWARE_BATTERY_PATH, tz='Europe/Helsinki')
screen_data = niimpy.read_csv(config.MULTIUSER_AWARE_SCREEN_PATH, tz='Europe/Helsinki')
[5]:
bat_data.head()
[5]:
user device time battery_level battery_status battery_health battery_adaptor datetime
2020-01-09 02:20:02.924999936+02:00 jd9INuQ5BBlW 3p83yASkOb_B 1.578529e+09 74 3 2 0 2020-01-09 02:20:02.924999936+02:00
2020-01-09 02:21:30.405999872+02:00 jd9INuQ5BBlW 3p83yASkOb_B 1.578529e+09 73 3 2 0 2020-01-09 02:21:30.405999872+02:00
2020-01-09 02:24:12.805999872+02:00 jd9INuQ5BBlW 3p83yASkOb_B 1.578529e+09 72 3 2 0 2020-01-09 02:24:12.805999872+02:00
2020-01-09 02:35:38.561000192+02:00 jd9INuQ5BBlW 3p83yASkOb_B 1.578530e+09 72 2 2 0 2020-01-09 02:35:38.561000192+02:00
2020-01-09 02:35:38.953000192+02:00 jd9INuQ5BBlW 3p83yASkOb_B 1.578530e+09 72 2 2 2 2020-01-09 02:35:38.953000192+02:00

The dataframe looks fine. In this case, we are interested in the battery_status information. This is standard information provided by Android. However, if the dataframe stores this information in a column with a different name, we can pass the custom name via the argument battery_column_name (again, we will see an example below).

[6]:
screen_data.head()
[6]:
user device time screen_status datetime
2020-01-09 02:06:41.573999872+02:00 jd9INuQ5BBlW OWd1Uau8POix 1.578528e+09 0 2020-01-09 02:06:41.573999872+02:00
2020-01-09 02:09:29.152000+02:00 jd9INuQ5BBlW OWd1Uau8POix 1.578529e+09 1 2020-01-09 02:09:29.152000+02:00
2020-01-09 02:09:32.790999808+02:00 jd9INuQ5BBlW OWd1Uau8POix 1.578529e+09 3 2020-01-09 02:09:32.790999808+02:00
2020-01-09 02:11:41.996000+02:00 jd9INuQ5BBlW OWd1Uau8POix 1.578529e+09 0 2020-01-09 02:11:41.996000+02:00
2020-01-09 02:16:19.010999808+02:00 jd9INuQ5BBlW OWd1Uau8POix 1.578529e+09 1 2020-01-09 02:16:19.010999808+02:00

This dataframe looks fine too. In this case, we are interested in the screen_status information, which also contains standardized values provided by Android. The column does not need to be named “screen_status”, as we can pass the actual name later on. We will see an example later.

* TIP! Data format requirements (or what should our data look like)

Data can take other shapes and formats. However, the niimpy data schema requires it to be in a certain shape. This means the application dataframe needs to have at least the following characteristics:

1. One row per app prompt. Each row should store information about one app prompt only.
2. Each row’s index should be a timestamp.
3. There should be at least three columns:
   - index: date and time when the event happened (timestamp)
   - user: the name of the user whose data is analyzed. Each user should have a unique name or hash (i.e. one hash for each unique user)
   - application_name: the Android application name
4. Columns additional to those listed in item 3 are allowed.
5. The column names do not need to be exactly “user” and “application_name”, as we can pass our own names in an argument (to be explained later).
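For instance, a minimal dataframe satisfying these requirements could be built from scratch as in the sketch below (the user name and timestamps are made up):

[ ]:
minimal = pd.DataFrame({
    "user": ["user_1", "user_1"],                      # unique name or hash per user
    "application_name": ["Gmail", "Android System"],   # Android application name
}, index=pd.to_datetime(["2019-08-05 14:02:51+03:00",  # timestamp index: when the app was prompted
                         "2019-08-05 14:02:58+03:00"]))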

Below is another example, taken from the data we loaded earlier, that complies with these minimum requirements:

[7]:
example_dataschema = data[['user','application_name']]
example_dataschema.head(3)
[7]:
user application_name
2019-08-05 14:02:51.009999872+03:00 iGyXetHE3S8u Android System
2019-08-05 14:02:58.009999872+03:00 iGyXetHE3S8u Android System
2019-08-05 14:03:17.009999872+03:00 iGyXetHE3S8u Google Play Music

Similarly, if we employ screen and battery data, they need to fulfill the minimum data schema requirements. Below are brief examples of these dataframes that comply with the minimum requirements.

[8]:
example_screen_dataschema = screen_data[['user','screen_status']]
example_screen_dataschema.head(3)
[8]:
user screen_status
2020-01-09 02:06:41.573999872+02:00 jd9INuQ5BBlW 0
2020-01-09 02:09:29.152000+02:00 jd9INuQ5BBlW 1
2020-01-09 02:09:32.790999808+02:00 jd9INuQ5BBlW 3
[9]:
example_battery_dataschema = bat_data[['user','battery_status']]
example_battery_dataschema.head(3)
[9]:
user battery_status
2020-01-09 02:20:02.924999936+02:00 jd9INuQ5BBlW 3
2020-01-09 02:21:30.405999872+02:00 jd9INuQ5BBlW 3
2020-01-09 02:24:12.805999872+02:00 jd9INuQ5BBlW 3

4. Extracting features

There are two ways to extract features. We could use each function separately or we could use niimpy’s ready-made wrapper. Both ways will require us to specify arguments to pass to the functions/wrapper in order to customize the way the functions work. These arguments are specified in dictionaries. Let’s first understand how to extract features using stand-alone functions.

4.1 Extract features using stand-alone functions

We can use niimpy’s functions to compute application features. Each function requires two inputs:

  • (mandatory) a dataframe that complies with the minimum requirements (see ‘* TIP! Data format requirements’ above)

  • (optional) an argument dictionary for the stand-alone function

4.1.1 The argument dictionary for stand-alone functions (or how we specify the way a function works)

In this dictionary, we can set two main options to customize the way a stand-alone function works:

  • the name of the columns to be preprocessed: since the dataframe may have different columns, we need to specify which column contains the data we would like to preprocess. To do so, we simply pass the name of the column to the argument app_column_name.

  • the way we resample: resampling options are specified in niimpy as a dictionary. niimpy’s resampling and aggregating relies on pandas.DataFrame.resample, so mastering the use of this pandas function will help us greatly in niimpy’s preprocessing. Please familiarize yourself with the pandas resample function before continuing. Briefly, to use the pandas.DataFrame.resample function, we need a rule. This rule states the intervals we would like to use to resample our data (e.g., 15-seconds, 30-minutes, 1-hour). Nevertheless, we can pass more details to the function to specify the exact sampling we would like. For example, we could use the closed argument if we would like to specify which side of each interval is closed, or we could use the offset argument if we would like to start our binning with an offset, etc. There are plenty of options, so we strongly recommend having the pandas.DataFrame.resample documentation at hand. All arguments for pandas.DataFrame.resample are specified in a dictionary whose keys are the argument names of pandas.DataFrame.resample and whose values are the values for each of these selected arguments. This dictionary is passed as the value of the key resample_args in niimpy; a sketch of how it is forwarded to pandas follows below.
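Conceptually, niimpy unpacks this dictionary straight into the pandas call (the custom feature in section 5 does exactly this with .resample(**config["resample_args"])). A minimal sketch of the equivalence, using the application dataframe loaded above:

[ ]:
resample_args = {"rule": "30T", "offset": "15S"}
# The dictionary is expanded into keyword arguments, so these two lines are equivalent:
binned = data.resample(**resample_args).count()
binned_explicit = data.resample(rule="30T", offset="15S").count()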

Let’s see some basic examples of these dictionaries:

[10]:
config1 = {"app_column_name":"application_name","resample_args":{"rule":"1D"}}
config2 = {"app_column_name":"other_name", "screen_column_name":"screen_name", "resample_args":{"rule":"45T","origin":"end"}}

Here, we have two basic feature dictionaries.

  • config1 will be used to analyze the data stored in the column application_name in our dataframe. The data will be binned in one-day periods.

  • config2 will be used to analyze the data stored in the column other_name in our dataframe. In addition, we will provide some screen data in the column screen_name. The data will be binned in 45-minute bins, with the binning anchored to the last timestamp in the dataframe (origin “end”).

Default values: if no arguments are passed, niimpy’s default values are “application_name” for app_column_name, “screen_status” for screen_column_name, and “battery_status” for battery_column_name, together with the default 30-minute aggregation bins.

4.1.2 Using the functions

Now that we understand how the functions are customized, it is time we compute our first application feature. Suppose that we are interested in extracting the number of times each app group has been used within 1-minute bins. We will need niimpy’s app_count function, the data, and a dictionary to customize the function. Let’s create the dictionary first:

[11]:
config={"app_column_name":"application_name","resample_args":{"rule":"1T"}}

Now let’s use the function to preprocess the data.

[12]:
my_app_count = app.app_count(data, bat_data, screen_data, config)
my_app_count.head()
[12]:
user device app_group count
datetime
2019-08-05 14:02:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 28
2019-08-05 14:03:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 58
2019-08-05 14:02:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 3
2019-08-05 14:03:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 17
2019-08-05 14:02:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs system 9

We see that the bins are indeed 1-minute bins; however, they are aligned to fixed, predetermined intervals, i.e. a bin does not start at the time of the first datapoint. Instead, pandas starts the binning at 00:00:00 of every day and counts 1-minute intervals from there.

If we want the binning to start from the first datapoint in our dataset, we need the origin parameter and a for loop.

[13]:
users = list(data['user'].unique())
results = []
for user in users:
    start_time = data[data["user"]==user].index.min()
    config={"app_column_name":"application_name","resample_args":{"rule":"1T","origin":start_time}}
    results.append(app.app_count(data[data["user"]==user],bat_data[bat_data["user"]==user], screen_data[screen_data["user"]==user], config))
my_app_count = pd.concat(results)
[14]:
my_app_count
[14]:
user device app_group count
datetime
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 86
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 20
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs system 15
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs utility 4
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs work 7

Compare the timestamps and notice the small difference in this example. In the earlier output (cell [12]), the first timestamp is at 14:02:00, whereas in the new app_count, the first timestamp is at 14:02:42.

The functions can also be called in the absence of battery or screen data. In this case, the function does not account for when the screen is turned off or when the battery is depleted.

[15]:
empty_bat = pd.DataFrame()
empty_screen = pd.DataFrame()
no_bat = app.app_count(data, empty_bat, screen_data, config) #no battery information
no_screen = app.app_count(data, bat_data, empty_screen, config) #no screen information
no_bat_no_screen = app.app_count(data, empty_bat, empty_screen, config) #no battery and no screen information
[16]:
no_bat.head()
[16]:
user device app_group count
datetime
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 86
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 20
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs system 15
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs utility 4
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs work 7
[17]:
no_screen.head()
[17]:
user device app_group count
datetime
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 86
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 20
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs system 15
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs utility 4
2019-08-05 14:02:42.009999872+03:00 iGyXetHE3S8u Cq9vueHh3zVs work 7

Although the first rows shown above look identical, there are some small differences between these two dataframes. For example, the no_screen dataframe includes the app_group “off”, as it takes the battery data into account and knows when the phone has been shut down.
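A quick way to see this (a sketch using only the column names from the outputs above) is to check which result contains the “off” group:

[ ]:
print("off" in no_screen["app_group"].unique())  # battery data available: shutdown periods labelled "off"
print("off" in no_bat["app_group"].unique())     # no battery data: the "off" group is likely absent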

4.2 Extract features using the wrapper

We can use niimpy’s ready-made wrapper to extract one or several features at the same time. The wrapper requires two inputs:

  • (mandatory) a dataframe that complies with the minimum requirements (see ‘* TIP! Data format requirements’ above)

  • (optional) an argument dictionary for the wrapper

4.2.1 The argument dictionary for wrapper (or how we specify the way the wrapper works)

This argument dictionary will use dictionaries created for stand-alone functions. If you do not know how to create those argument dictionaries, please read the section 4.1.1 The argument dictionary for stand-alone functions (or how we specify the way a function works) first.

The wrapper dictionary is simple. Its keys are the names of the features we want to compute. Its values are argument dictionaries created for each stand-alone function we will employ. Let’s see some examples of wrapper dictionaries:

[18]:
wrapper_features1 = {app.app_count:{"app_column_name":"application_name", "resample_args":{"rule":"1T"}},
            app.app_duration:{"app_column_name":"some_name", "screen_column_name":"screen_name", "battery_column_name":"battery_name", "resample_args":{"rule":"1T"}}}
  • wrapper_features1 will be used to analyze two features, app_count and app_duration. For the feature app_count, we will use the data stored in the column application_name in our dataframe and the data will be binned in one-minute periods. For the feature app_duration, we will use the data stored in the column some_name in our dataframe and the data will also be binned in one-minute periods. In addition, we will employ screen and battery data, which are stored in the columns screen_name and battery_name.

[19]:
wrapper_features2 = {app.app_count:{"app_column_name":"application_name", "resample_args":{"rule":"1T", "offset":"15S"}},
            app.app_duration:{"app_column_name":"some_name", "screen_column_name":"screen_name", "battery_column_name":"battery_name", "resample_args":{"rule":"30S"}}}
  • wrapper_features2 will be used to analyze two features, app_count and app_duration. For the feature app_count, we will use the data stored in the column application_name in our dataframe and the data will be binned in one-minute periods with a 15-second offset. For the feature app_duration, we will use the data stored in the column some_name in our dataframe and the data will be binned in 30-second periods. In addition, we will employ screen and battery data, which are stored in the columns screen_name and battery_name.

Default values: if no arguments are passed, niimpy’s default values are “application_name” for app_column_name, “screen_status” for screen_column_name, “battery_status” for battery_column_name, and 30-minute aggregation bins. Moreover, in the absence of the argument dictionary, the wrapper will compute all the available functions. As with the stand-alone functions, we may input empty dataframes if we do not have screen or battery data.

4.2.2 Using the wrapper

Now that we understand how the wrapper is customized, it is time we compute our first application feature using the wrapper. Suppose that we are interested in counting how many times each app group has been used within 30-second bins. We will need niimpy’s extract_features_app function, the data, and a dictionary to customize the wrapper. Let’s create the dictionary first:

[20]:
wrapper_features1 = {app.app_count:{"app_column_name":"application_name", "resample_args":{"rule":"30S"}}}

Now let’s use the wrapper

[21]:
results_wrapper = app.extract_features_app(data, bat_data, screen_data, features=wrapper_features1)
results_wrapper.head(5)
computing <function app_count at 0x7fa65f373380>...
[21]:
user device app_group count
datetime
2019-08-05 14:02:30+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 28
2019-08-05 14:03:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 34
2019-08-05 14:03:30+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 24
2019-08-05 14:02:30+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 3
2019-08-05 14:03:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 15

Our first attempt was successful. Now, let’s try something more. Let’s assume we want to compute the app_count and app_duration in 20-second bins. Moreover, let’s assume we do not want to use the screen or battery data this time. Note that the app_duration values are in seconds.

[22]:
wrapper_features2 = {app.app_count:{"app_column_name":"application_name", "resample_args":{"rule":"20S"}},
                     app.app_duration:{"app_column_name":"application_name", "resample_args":{"rule":"20S"}}}
results_wrapper = app.extract_features_app(data, empty_bat, empty_screen, features=wrapper_features2)
results_wrapper.head(5)
computing <function app_count at 0x7fa65f373380>...
computing <function app_duration at 0x7fa65f373420>...
[22]:
user device app_group count duration
datetime
2019-08-05 14:02:40+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 28 600.0
2019-08-05 14:03:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 20 66.0
2019-08-05 14:03:20+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 31 -719.0
2019-08-05 14:03:40+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 7 -206.0
2019-08-05 14:02:40+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 3 93.0

Great! Another successful attempt. We see from the results that more columns were added with the required calculations. We also see that some durations are negative; this may be due to the lack of screen and battery data. This is how the wrapper works when all features are computed with the same bins. Now, let’s see how the wrapper performs when each function has different binning requirements. Let’s assume we need to compute the app_count every 20 seconds and the app_duration every 10 seconds with an offset of 5 seconds.

[23]:
wrapper_features3 = {app.app_count:{"app_column_name":"application_name", "resample_args":{"rule":"20S"}},
                     app.app_duration:{"app_column_name":"application_name", "resample_args":{"rule":"10S", "offset":"5S"}}}
results_wrapper = app.extract_features_app(data, bat_data, screen_data, features=wrapper_features3)
results_wrapper.head(5)
computing <function app_count at 0x7fa65f373380>...
computing <function app_duration at 0x7fa65f373420>...
[23]:
user device app_group count duration
datetime
2019-08-05 14:02:40+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 28.0 NaN
2019-08-05 14:03:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 20.0 NaN
2019-08-05 14:03:20+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 31.0 NaN
2019-08-05 14:03:40+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 7.0 NaN
2019-08-05 14:02:40+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 3.0 NaN
[24]:
results_wrapper.tail(5)
[24]:
user device app_group count duration
datetime
2019-08-05 14:02:45+03:00 iGyXetHE3S8u Cq9vueHh3zVs work NaN 1.0
2019-08-05 14:02:55+03:00 iGyXetHE3S8u Cq9vueHh3zVs work NaN 3.0
2019-08-05 14:03:05+03:00 iGyXetHE3S8u Cq9vueHh3zVs work NaN 0.0
2019-08-05 14:03:15+03:00 iGyXetHE3S8u Cq9vueHh3zVs work NaN 2.0
2019-08-05 14:03:25+03:00 iGyXetHE3S8u Cq9vueHh3zVs work NaN 0.0

The output is once again a dataframe. In this case, two aggregations are shown. The first one is the 20-second aggregation computed for the app_count feature (head). The second one is the 10-second aggregation with a 5-second offset for app_duration (tail). Because app_count is not computed for the 10-second bins, the count column is NaN in those rows. Similarly, because app_duration is not computed for the 20-second bins, the duration column is NaN in those rows.
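If it is more convenient to handle each aggregation separately, one possibility (a sketch relying only on the columns shown above) is to drop the NaN placeholders:

[ ]:
count_rows = results_wrapper.dropna(subset=["count"])        # 20-second app_count bins only
duration_rows = results_wrapper.dropna(subset=["duration"])  # 10-second (5-second offset) app_duration bins only
count_rows.head()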

4.2.3 Wrapper and its default option

The default option will compute all features in 30-minute aggregation windows. To use the extract_features_app function with its default options, simply call the function.

[25]:
default = app.extract_features_app(data, bat_data, screen_data, features=None)
computing <function app_count at 0x7fa65f373380>...
computing <function app_duration at 0x7fa65f373420>...

The function prints the computed features so you can track its progress. Now, let’s have a look at the output:

[26]:
default.head()
[26]:
user device app_group count duration
datetime
2019-08-05 14:00:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs comm 86 37.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs leisure 20 7.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs system 15 7.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs utility 4 2.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u Cq9vueHh3zVs work 7 6.0

5. Implementing own features

If none of the provided functions suits our needs, we can easily implement our own customized features. To do so, we need to define a function that accepts a dataframe and returns a dataframe. The returned object should be indexed by user and app_group (multi-index). To make the feature readily available in the default options, we need to add the app prefix to the new function name (e.g. app_my_new_feature). Let’s assume we need a new function that computes the maximum duration per app group. Let’s first define the function.

[27]:
def app_max_duration(df, bat, screen, config=None):
    # Fill in defaults so the function also works without a config dictionary.
    if config is None:
        config = {}
    if "group_map" not in config:
        config["group_map"] = app.MAP_APP
    if "resample_args" not in config:
        config["resample_args"] = {"rule": "30T"}

    # Classify each app into its group.
    df2 = app.classify_app(df, config)

    # Duration of an event = time until the next event; discard gaps longer
    # than 10 hours, which are most likely not real usage.
    df2["duration"] = df2["datetime"].diff().shift(-1)
    thr = pd.Timedelta("10 hours")
    df2 = df2[~(df2.duration > thr)]
    df2["duration"] = df2["duration"].dt.total_seconds()
    df2.dropna(inplace=True)

    result = pd.DataFrame()
    if len(df2) > 0:
        df2["datetime"] = pd.to_datetime(df2["datetime"])
        df2.set_index("datetime", inplace=True)
        # Maximum single-event duration per user, app group, and resampling bin.
        result = df2.groupby(["user", "app_group"])["duration"].resample(**config["resample_args"]).max()
        result = result.reset_index(["user", "app_group"])

    return result

Then, we can call our new function in the stand-alone way or through the extract_features_app function. Because the stand-alone way is the common way to call functions in Python, we will not show it. Instead, we will show how to integrate this new function into the wrapper. Let’s use the same data and assume we want the default behavior of the wrapper.

[28]:
customized_features = app.extract_features_app(data, bat_data, screen_data, features={app_max_duration: {}})
computing <function app_max_duration at 0x7fa65f3fefc0>...
[29]:
customized_features.head()
[29]:
user app_group duration
datetime
2019-08-05 14:00:00+03:00 iGyXetHE3S8u comm 59.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u leisure 36.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u system 53.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u utility 30.0
2019-08-05 14:00:00+03:00 iGyXetHE3S8u work 19.0