%0 Journal Article
%@ 2369-1999
%I JMIR Publications
%V 9
%N
%P e47646
%T Open-Source, Step-Counting Algorithm for Smartphone Data Collected in Clinical and Nonclinical Settings: Algorithm Development and Validation Study
%A Straczkiewicz,Marcin
%A Keating,Nancy L
%A Thompson,Embree
%A Matulonis,Ursula A
%A Campos,Susana M
%A Wright,Alexi A
%A Onnela,Jukka-Pekka
%+ Department of Biostatistics, Harvard T.H. Chan School of Public Health, 677 Huntington Ave, Boston, MA, 02115, United States, 1 617 495 1000, mstraczkiewicz@hsph.harvard.edu
%K accelerometer
%K cancer
%K open-source
%K smartphone
%K step count
%K validation
%K wearable
%D 2023
%7 15.11.2023
%9 Original Paper
%J JMIR Cancer
%G English
%X Background: Step counts are increasingly used in public health and clinical research to assess well-being, lifestyle, and health status. However, estimating step counts using commercial activity trackers has several limitations, including a lack of reproducibility, generalizability, and scalability. Smartphones are a potentially promising alternative, but their step-counting algorithms require robust validation that accounts for temporal variation in sensor body location, individual gait characteristics, and heterogeneous health states. Objective: Our goal was to evaluate an open-source, step-counting method for smartphones under various measurement conditions against step counts estimated from data collected simultaneously at different body locations (“cross-body” validation), against manually ascertained ground truth (“visually assessed” validation), and against step counts from a commercial activity tracker (Fitbit Charge 2) worn by patients with advanced cancer (“commercial wearable” validation). Methods: We used 8 independent data sets collected in controlled, semicontrolled, and free-living environments with different devices (primarily Android smartphones and wearable accelerometers) carried at typical body locations. A total of 5 data sets (n=103) were used for cross-body validation, 2 data sets (n=107) for visually assessed validation, and 1 data set (n=45) for commercial wearable validation. In each scenario, step counts were estimated using a previously published step-counting method for smartphones that uses raw subsecond-level accelerometer data. We calculated the mean bias and limits of agreement (LoA) between step count estimates and validation criteria using Bland-Altman analysis. Results: In the cross-body validation data sets, participants performed a mean of 751.7 (SD 581.2) steps, and the mean bias was –7.2 (LoA –47.6, 33.3) steps, or –0.5%. In the visually assessed validation data sets, the mean ground truth step count was 367.4 (SD 359.4) steps, while the mean bias was –0.4 (LoA –75.2, 74.3) steps, or 0.1%. In the commercial wearable validation data set, Fitbit devices indicated a mean step count of 1931.2 (SD 2338.4), while the calculated bias was –67.1 (LoA –603.8, 469.7) steps, or a difference of 3.4%. Conclusions: This study demonstrates that our open-source, step-counting method for smartphone data provides reliable step counts across sensor locations, measurement scenarios, and populations, including healthy adults and patients with cancer.
%M 37966891
%R 10.2196/47646
%U https://cancer.jmir.org/2023/1/e47646
%U https://doi.org/10.2196/47646
%U http://www.ncbi.nlm.nih.gov/pubmed/37966891
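
The abstract reports agreement as a mean bias with 95% limits of agreement (LoA) from Bland-Altman analysis. The following minimal Python sketch illustrates how those two quantities are typically computed from paired step counts; it is not the authors' published code, and the array names (smartphone_steps, reference_steps) and the example values are assumptions made purely for illustration.

import numpy as np

def bland_altman(estimates, reference):
    """Return the mean bias and 95% limits of agreement for paired measurements.

    Illustrative only: LoA = bias +/- 1.96 * SD of the paired differences,
    as in a standard Bland-Altman analysis; not the authors' published code.
    """
    estimates = np.asarray(estimates, dtype=float)
    reference = np.asarray(reference, dtype=float)
    differences = estimates - reference           # signed error per trial
    bias = differences.mean()                     # mean bias
    sd = differences.std(ddof=1)                  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired step counts: smartphone estimates vs. a reference count.
smartphone_steps = [512, 750, 1024, 388, 930]
reference_steps = [520, 748, 1040, 390, 925]
bias, (loa_low, loa_high) = bland_altman(smartphone_steps, reference_steps)
print(f"bias={bias:.1f} steps, LoA=({loa_low:.1f}, {loa_high:.1f})")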