Ten simple rules for developing usable software in computational biology

Full article title: Ten simple rules for developing usable software in computational biology
Journal: PLOS Computational Biology
Author(s): List, Markus; Ebert, Peter; Albrecht, Felipe
Author affiliation(s): Max Planck Institute for Informatics, Saarland Informatics Campus
Primary contact (email): pebert at mpi-inf dot mpg dot de
Editors: Markel, Scott
Year published: 2017
Volume and issue: 13(1)
Page(s): e1005265
DOI: 10.1371/journal.pcbi.1005265
ISSN: 1553-7358
Distribution license: Creative Commons Attribution 4.0 International
Website: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005265
Download: http://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005265&type=printable (PDF)

Introduction

The rise of high-throughput technologies in molecular biology has led to a massive amount of publicly available data. While computational method development has been a cornerstone of biomedical research for decades, the rapid technological progress in the wet laboratory makes it difficult for software development to keep pace. Wet lab scientists rely heavily on computational methods, especially since more research is now performed in silico. However, suitable tools do not always exist, and not everyone has the skills to write complex software. Computational biologists are required to close this gap, but they often lack formal training in software engineering. To alleviate this, several related challenges have been previously addressed in the Ten Simple Rules series, including reproducibility[1], effectiveness[2], and open-source development of software.[3][4]

Here, we want to shed light on issues concerning software usability. Usability is commonly defined as "a measure of interface quality that refers to the effectiveness, efficiency, and satisfaction with which users can perform tasks with a tool."[5] Considering the subjective nature of this topic, a broad consensus may be hard to achieve. Nevertheless, good usability is imperative for achieving wide acceptance of a software tool in the community. In many cases, academic software starts out as a prototype that solves one specific task and is not geared for a larger user group. As soon as the developer realizes that the complexity of the problems solved by the software could make it widely applicable, the software will grow to meet the new demands. At least by this point, if not sooner, usability should become a priority. Unfortunately, efforts in scientific software development are constrained by limited funding, time, and rapid turnover of group members. As a result, scientific software is often poorly documented, non-intuitive, non-robust with regard to input data and parameters, and hard to install. For many use cases, there is a plethora of tools that appear very similar, making it difficult for the user to select the one that best fits their needs. Not surprisingly, a substantial fraction of these tools are probably abandonware, i.e., no longer actively developed or supported in spite of their potential value to the scientific community.

To our knowledge, software development as part of scientific research is usually carried out by individuals or small teams with no more than two or three members. Hence, the responsibility of designing, implementing, testing, and documenting the code rests on few shoulders. Additionally, there is pressure to produce publishable results or, at least, to contribute analysis work to ongoing projects. Consequently, academic software is typically released as a prototype. We acknowledge that such a tool cannot adhere to and should not be judged by the standards that we take for granted for production-grade software. However, widespread use of a tool is typically in the researcher's interest. To this end, we propose 10 simple rules that, in our experience, have a considerable impact on improving the usability of scientific software.

Rule 1: Identify the missing pieces

Unless you are a pioneer, and few of us are, the problem you are working on is likely addressed by existing tools. As a professional, you are aware of this software but may consider it cumbersome, non-functional, or otherwise unacceptable for your demands. Make sure that your judgment is shared by a substantial fraction of the prospective users before you start developing a new tool. Usable software should offer the features needed and behave as expected by the community. Moreover, a new tool needs to provide substantial novelty over existing solutions. For this purpose, list the requirements for the software and create a comparison table that sets the new tool against existing solutions. This allows you to carve out the selling points of your tool in a systematic fashion.
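
As a brief illustration, such a comparison table might look like the following; the tools and requirements shown here are purely hypothetical placeholders:

  Requirement              New tool   Tool A   Tool B
  Reads standard formats   yes        yes      no
  Graphical interface      yes        no       no
  Parallel execution       yes        no       yes
  Actively maintained      yes        yes      no

Each row in which the new tool outperforms the alternatives is a candidate selling point for the tool and its accompanying publication.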

Rule 2: Collect feedback from prospective users

Software can be regarded as providing the interface between wet lab science and data analysis. A lack of communication between both sides will lead to misunderstandings that need to be rectified by substantially changing the code base in a late phase of the project. Avoid this pitfall by exposing potential users to a prototype. Discussions on data formats or on the design of the user interface will reveal unforeseen challenges and help to determine if a tool is sufficiently intuitive.[6] To plan your progress, keep a record of suggested improvements and existing issues.
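
Such a record need not be elaborate; a plain-text list kept alongside the code is often enough at the prototype stage. The entries below are hypothetical examples of the kind of items worth tracking:

  [open]   #12  Support BED input in addition to GFF (requested by two test users)
  [open]   #15  Progress indicator stalls on input files larger than 1 GB
  [closed] #9   Clarify the error message shown for malformed FASTA headers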

Rule 3: Be ready for data growth

First, estimate the expected data growth in your field and then design your software accordingly. To this end, consider parallelization and make sure your tool can be integrated seamlessly into workflow management systems (e.g., Galaxy[7] and Taverna[8]), pipeline frameworks (e.g., Ruffus[9] and Snakemake[10]), or cluster frameworks (e.g., Hadoop, http://hadoop.apache.org/). Moreover, make sure that the user interface can scale to growing data volumes. For example, visualizations should remain comprehensible for larger datasets, e.g., by displaying only parts of the data or by aggregating results.
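
As a minimal sketch of the parallelization advice, the following Python script distributes per-sample work across a pool of worker processes and writes tab-separated output, which makes it easy to chain in a pipeline or wrap in a workflow manager such as Snakemake. The input files and the work done in process_sample() are hypothetical placeholders:

  #!/usr/bin/env python
  """Process an arbitrary number of sample files in parallel.

  The per-sample work in process_sample() is a hypothetical placeholder.
  """
  import argparse
  import multiprocessing


  def process_sample(path):
      # Placeholder analysis: count non-comment lines in one input file.
      with open(path) as handle:
          return path, sum(1 for line in handle if not line.startswith("#"))


  def main():
      parser = argparse.ArgumentParser(description="Process samples in parallel.")
      parser.add_argument("samples", nargs="+", help="one input file per sample")
      parser.add_argument("--jobs", type=int, default=multiprocessing.cpu_count(),
                          help="number of worker processes")
      args = parser.parse_args()

      # A worker pool lets the same command scale from a laptop to a cluster node.
      with multiprocessing.Pool(processes=args.jobs) as pool:
          for path, count in pool.imap_unordered(process_sample, args.samples):
              # Tab-separated output keeps the tool easy to chain in pipelines.
              print(path, count, sep="\t")


  if __name__ == "__main__":
      main()

Invoked as, e.g., python process_samples.py --jobs 8 sample1.txt sample2.txt, the script uses the requested number of cores regardless of how many inputs a growing study produces.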


References

  1. Sandve, G.K.; Nekrutenko, A.; Taylor, J.; Hovig, E. (2013). "Ten simple rules for reproducible computational research". PLOS Computational Biology 9 (10): e1003285. doi:10.1371/journal.pcbi.1003285. PMC 3812051. PMID 24204232. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3812051.
  2. Osborne, J.M.; Bernabeu, M.O.; Bruna, M. et al. (2014). "Ten simple rules for effective computational research". PLOS Computational Biology 10 (3): e1003506. doi:10.1371/journal.pcbi.1003506. PMC 3967918. PMID 24675742. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3967918.
  3. Prlić, A.; Procter, J.B. (2012). "Ten simple rules for the open development of scientific software". PLOS Computational Biology 8 (12): e1002802. doi:10.1371/journal.pcbi.1002802. PMC 3516539. PMID 23236269. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3516539.
  4. Perez-Riverol, Y.; Gatto, L.; Wang, R. et al. (2016). "Ten simple rules for taking advantage of Git and GitHub". PLOS Computational Biology 12 (7): e1004947. doi:10.1371/journal.pcbi.1004947. PMC 4945047. PMID 27415786. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4945047.
  5. Dillon, A. (2001). "Human Acceptance of Information Technology". In Karwowski, W. (ed.), Encyclopedia of Human Factors and Ergonomics. Taylor & Francis. pp. 673–675. ISBN 9780748408474.
  6. Thielsch, M.T.; Engel, R.; Hirschfeld, G. (2015). "Expected usability is not a valid indicator of experienced usability". PeerJ Computer Science 1: e19. doi:10.7717/peerj-cs.19.
  7. Giardine, B.; Riemer, C.; Hardison, R.C. et al. (2005). "Galaxy: A platform for interactive large-scale genome analysis". Genome Research 15 (10): 1451–1455. doi:10.1101/gr.4086505. PMC 1240089. PMID 16169926. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1240089.
  8. Wolstencroft, K.; Haines, R.; Fellows, D. et al. (2013). "The Taverna workflow suite: Designing and executing workflows of Web Services on the desktop, web or in the cloud". Nucleic Acids Research 41 (W1): W557–W561. doi:10.1093/nar/gkt328. PMC 3692062. PMID 23640334. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3692062.
  9. Goodstadt, L. (2010). "Ruffus: A lightweight Python library for computational pipelines". Bioinformatics 26 (21): 2778–2779. doi:10.1093/bioinformatics/btq524. PMID 20847218.
  10. Köster, J.; Rahmann, S. (2012). "Snakemake: A scalable bioinformatics workflow engine". Bioinformatics 28 (19): 2520–2522. doi:10.1093/bioinformatics/bts480. PMID 22908215.

Notes

This presentation is faithful to the original, with only a few minor formatting changes. In some cases important information was missing from the references; that information was added.