
Full article title: Generalized Procedure for Screening Free Software and Open Source Software Applications
Author(s): Joyce, John
Author affiliation(s): Arcana Informatica; Scientific Computing
Primary contact: Email: jrjoyce@gmail.com
Year published: 2015
Distribution license: Creative Commons Attribution-ShareAlike 4.0 International
Website: Print-friendly version
Download: PDF (Note: Inline references fail to load in PDF version)

Abstract

Free software and open-source software applications have become popular alternative tools in both scientific research and other fields. However, selecting the optimal application for use in a project can be a major task in itself, as the list of potential applications must first be identified and screened to determine promising candidates before an in-depth analysis of systems can be performed. To simplify this process, we have initiated a project to generate a library of in-depth reviews of free software and open-source software applications. Preliminary to beginning this project, a review of evaluation methods available in the literature was performed. As we found no one method that stood out, we synthesized a general procedure, drawing on a variety of available sources, for screening a designated class of applications to determine which ones to evaluate in more depth. In this paper, we examine a number of currently published processes to identify their strengths and weaknesses. By selecting from these processes, we synthesize a proposed screening procedure to triage available systems and identify those most worth pursuing. To illustrate the functionality of this technique, the screening procedure is executed against a selected class of applications.

Introduction

Results

Literature review

A search of the literature returns thousands of papers related to open-source software, but most are of limited value with regard to the scope of this project. The need for a process to assist in selecting between open-source projects is mentioned in a number of these papers, and there appear to be over a score of different published procedures. Regrettably, none of these methodologies appears to have gained large-scale support in the industry.

Stol and Babar have published a framework for comparing evaluation methods targeting open-source software and include a comparison of 20 of them.[1] They noted that web sites simply consisting of a suggestion list for selecting an open-source application were not included in this comparison. This selection difficulty is nothing new with FLOSS applications. In their 1994 paper, Fritz and Carter reviewed over a dozen existing selection methodologies, covering their strengths and weaknesses, the mathematics used, and other factors involved.[2]

No. | Name | Year | Orig. | Method
1 | Capgemini Open Source Maturity Model | 2003 | I | Yes
2 | Evaluation Framework for Open Source Software | 2004 | R | No
3 | A Model for Comparative Assessment of Open Source Products | 2004 | R | Yes
4 | Navica Open Source Maturity Model | 2004 | I | Yes
5 | Woods and Guliani's OSMM | 2005 | I | No
6 | Open Business Readiness Rating (OpenBRR)[3][4] | 2005 | R/I | Yes
7 | Atos Origin Method for Qualification and Selection of Open Source Software (QSOS) | 2006 | I | Yes
8 | Evaluation Criteria for Free/Open Source Software Products | 2006 | R | No
9 | A Quality Model for OSS Selection | 2007 | R | No
10 | Selection Process of Open Source Software | 2007 | R | Yes
11 | Observatory for Innovation and Technological transfer on Open Source software (OITOS) | 2007 | R | Yes
12 | Framework for OS Critical Systems Evaluation (FOCSE) | 2007 | R | No
13 | Balanced Scorecards for OSS | 2007 | R | No
14 | Open Business Quality Rating (OpenBQR) | 2007 | R | Yes
15 | Evaluating OSS through Prototyping | 2007 | R | Yes
16 | A Comprehensive Approach for Assessing Open Source Projects | 2008 | R | No
17 | Software Quality Observatory for Open Source Software (SQO-OSS) | 2008 | R | Yes
18 | An operational approach for selecting open source components in a software development project[5] | 2008 | R | No
19 | QualiPSo trustworthiness model | 2008 | R | No
20 | OpenSource Maturity Model (OMM)[6] | 2009 | R | No

Table 1. Comparison frameworks and methodologies for the examination of FLOSS applications, extracted from Stol and Babar.[1] The selection procedure is described in Stol and Babar's paper. 'Year' indicates the year of publication, 'Orig.' indicates whether the described process originated in industry (I) or research (R), and 'Method' indicates whether the paper describes a formal analysis method and procedure (Yes) or only a list of evaluation criteria (No).

Extensive comparisons between some of these methods have also been published, such as Deprez and Alexandre's comparative assessment of the OpenBRR and QSOS techniques.[7] Wasserman and Pal have also published a paper under the title "Evaluating Open Source Software," which appears to be more of an updated announcement and in-depth description of the Business Readiness Rating (BRR) framework.[8] Jadhav and Sonar have likewise examined the issue of both evaluating and selecting software packages, and they include a helpful analysis of the strengths and weaknesses of the various techniques.[9] Perhaps more importantly, they clearly point out that there is no common list of evaluation criteria. While the majority of the articles they reviewed listed the criteria used, Jadhav and Sonar indicated that these criteria frequently did not include a detailed definition, requiring each evaluator to apply their own, sometimes conflicting, interpretation.

Since the publication of Stol and Babar's paper, additional evaluation methods have been published. Of particular interest is a series of papers by Pani et al. describing their proposed FAME (Filter, Analyze, Measure and Evaluate) methodology.[10][11][12][13] In their "Transferring FAME" paper, they emphasized that the evaluation frameworks previously described in the published literature were often difficult to apply in real environments, as they were developed using an analytic research approach that incorporated a multitude of factors.[13]

Their stated design objective with FAME is to reduce the complexity of performing the application evaluation, particularly for small organizations. They specify "[t]he goals of FAME methodology are to aid the choice of high-quality F/OSS products, with high probability to be sustainable in the long term, and to be as simple and user friendly as possible." They further state that "[t]he main idea behind FAME is that the users should evaluate which solution amongst those available is more suitable to their needs by comparing technical and economical factors, and also taking into account the total cost of individual solutions and cash outflows. It is necessary to consider the investment in its totality and not in separate parts that are independent of one another."[13]
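
The FAME papers do not prescribe a specific cost model here, but the "investment in its totality" idea can be made concrete with a small sketch. The following Python fragment is a minimal illustration built on hypothetical cost categories and figures (none of them taken from the cited papers); it sums every cash outflow for each candidate over a fixed horizon rather than comparing license fees alone.

    # Illustrative only: the cost categories and figures are hypothetical
    # assumptions, not values or a formula from the FAME papers.
    from dataclasses import dataclass, field

    @dataclass
    class Candidate:
        name: str
        one_time_costs: dict = field(default_factory=dict)  # migration, training, ...
        annual_costs: dict = field(default_factory=dict)     # support, hosting, licenses, ...

        def total_cost(self, years: int) -> float:
            """Sum every cash outflow over the horizon, not just the license fee."""
            return sum(self.one_time_costs.values()) + years * sum(self.annual_costs.values())

    foss = Candidate(
        name="Hypothetical FOSS application",
        one_time_costs={"migration": 15000, "staff training": 5000},
        annual_costs={"support contract": 8000, "self-hosting": 3000},
    )
    proprietary = Candidate(
        name="Hypothetical proprietary application",
        one_time_costs={"migration": 5000, "staff training": 2000},
        annual_costs={"license and support": 12000},
    )

    for candidate in (foss, proprietary):
        print(f"{candidate.name}: five-year total outflow = {candidate.total_cost(years=5):,}")

On these made-up numbers, the absence of a license fee does not automatically make the free application the cheaper investment over five years, which is exactly the kind of whole-investment comparison the FAME authors are arguing for.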

This paper breaks the FAME methodology into four activities:

  1. Identify the constraints and risks of the project
  2. Identify and rank user requirements
  3. Identify and rank all key objectives of the project
  4. Generate a priority framework to allow comparison of needs and features

Their paper includes a formula for generating a score from the information collected; the evaluated system with the highest "major score," Pjtot, is the one selected. While it is common practice to define an analysis process that condenses all of the information gathered into a single score, I strongly caution against blindly accepting such a score. FAME, as well as a number of the other assessment methodologies, is designed for iterative use. The logical purpose of this is to allow you to add initially overlooked factors to your assessment, as well as to change the weighting of existing factors as you reevaluate their importance. However, this feature also makes it very easy to unconsciously, or consciously, skew the results of the evaluation to select any system you wish. Condensing everything down into a single value also strips out much of the information that you have worked so hard to gather. Note that you can generate the same resulting score from significantly different input values, as the sketch below illustrates. While such scores have value, selecting a system based on just the highest score could potentially leave you with a totally unworkable system.
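
To make that caveat concrete, here is a minimal sketch of a generic weighted-sum aggregation, standing in for FAME's actual formula (which the cited papers define in more detail). The criteria, weights, and ratings are hypothetical; the point is that two candidates with very different strengths and weaknesses can collapse to exactly the same single score.

    # Generic weighted-sum scoring; all criteria, weights, and ratings are
    # hypothetical and stand in for any single-score aggregation formula.
    def aggregate_score(weights, ratings):
        """Condense per-criterion ratings (1-5) into one weighted average."""
        assert weights.keys() == ratings.keys()
        return sum(weights[c] * ratings[c] for c in weights) / sum(weights.values())

    weights = {"functionality": 5, "documentation": 3, "community activity": 4, "security": 4}

    # Strong functionality and security, weak documentation and community...
    candidate_a = {"functionality": 5, "documentation": 1, "community activity": 2, "security": 5}
    # ...versus weak functionality but strong documentation and community.
    candidate_b = {"functionality": 1, "documentation": 5, "community activity": 5, "security": 4}

    print(aggregate_score(weights, candidate_a))  # 3.5
    print(aggregate_score(weights, candidate_b))  # 3.5

Both candidates report an identical 3.5, yet one may be unusable for a team that depends on documentation and an active community, while the other may lack essential functionality. The detail discarded by the aggregation is often the detail that decides whether the system is workable.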

Pani et al. also describe a FAMEtool to assist in this data gathering and evaluation.[12] However, a general web search, as well as a review of their FAME papers, revealed no indication of how to obtain this resource. The paper includes additional comparisons with other FLOSS analysis methodologies, and there are some hints suggesting that the FAMEtool is provided as a web service, but I have found no URL specified for it. As of now, I have received no responses from the research team via either e-mail or Skype regarding FAME, the FAMEtool, or feedback on its use.

During this same time frame, Soto and Ciolkowski also published papers describing the QualOSS Open Source Assessment Model and compared it to a number of the procedures in Stol and Babar's table.[14][15] Their focus was primarily on three perspectives: product quality, process maturity, and sustainability of the development community. Because they offer little more than a rudimentary examination of the process perspective, the following OSS project assessment models were deemed unsatisfactory: QSOS, Capgemini OSMM, Navica OSMM, and OpenBRR. They position QualOSS as an extension of the traditional CMMI and SPICE process maturity models. While there are multiple items in the second paper that are worth incorporating into an in-depth evaluation process, they do not seem suitable for what is intended as a quick survey.

Another paper, published by Haaland and Groven, also compared a number of open-source quality models. To this paper's credit, the authors devoted a significant amount of space to discussing the different definitions of quality and how the target audience of a tool might affect which definition was used.[16] Like Stol and Babar, they listed a number of quality assessment models to choose from, including OSMM, QSOS, OpenBRR, and others. For their comparison, they selected OpenBRR and QualOSS. They appear to have classified OpenBRR as a first-generation tool with a "[u]ser view on quality" and QualOSS as a second-generation tool with a "business point of view." An additional difference is that OpenBRR is primarily a manual tool, while QualOSS is primarily an automated tool. Their analysis clearly demonstrates the steps involved in using these tools and highlights where they are objective and where they are subjective. While they were unable to answer their original question as to whether the first- or second-generation tools did a better job of evaluation, to me they answered an even more important question that they had not asked: as they proceeded through their evaluation, it became apparent how much the questions defined in the methods could affect the results of the evaluations. Even though the authors might have considered the questions to be objective, I could readily see how some of them could be interpreted in alternate ways. My takeaway is an awareness of the potential danger of using rigid tools, as they can skew the accuracy of the evaluation results depending on exactly what you want the evaluated application to do and how you plan to use it. These models can be very useful guides, but they should not replace a carefully considered evaluation, as there will always be factors influencing the selection decision that did not occur to anyone when the specifications were being written.

Hauge et al. have noted that despite the development of several normative methods of assessment, empirical studies have failed to show widespread adoption of these methods.[17] From their survey of a number of Norwegian software companies, they noticed a tendency for selectors to skip the in-depth search for what they call the "best fit" application and fall back on what they refer to as a "first fit." This is an iterative procedure in which the knowledge gained from the failure of one set of component tests is incorporated into the evaluation of the next one. Their recommendation is for researchers to stop attempting to develop either general evaluation schemas or normative selection methods that would be applicable to any software application and instead focus on identifying situationally sensitive factors that can be used as evaluation criteria. This is a very rational approach, as every situation, even one evaluating the same set of applications, is going to be different, because each user's needs are different.
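
The "first fit" pattern they observed can be sketched in a few lines. The following fragment is a hypothetical illustration of that behavior (the candidate applications, feature sets, and requirement names are invented, and this is not code from the cited study): candidates are trialed in order, the first acceptable one is chosen, and requirements discovered during a failed trial carry over into the evaluation of later candidates.

    # Hypothetical sketch of "first fit" selection as described by Hauge et al.
    features = {
        "App A": {"sample tracking", "audit log"},
        "App B": {"sample tracking", "audit log", "LDAP login"},
    }

    def first_fit(candidates, must_haves, discovered_during_trial):
        requirements = set(must_haves)
        for candidate in candidates:
            # Trialing a component often surfaces requirements nobody wrote down;
            # they are kept and applied to every later candidate as well.
            requirements |= discovered_during_trial.get(candidate, set())
            if requirements <= features[candidate]:
                return candidate  # "good enough," so stop searching for a best fit
        return None

    # Pretend the App A trial revealed that LDAP login is actually required.
    discovered = {"App A": {"LDAP login"}}
    print(first_fit(["App A", "App B"], {"sample tracking", "audit log"}, discovered))
    # -> App B: App A fails once the discovered requirement is counted, and that
    #    requirement is then part of the App B evaluation.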

Ayala et al. have performed a study to determine more accurately why more people don't take advantage of the various published selection methodologies.[18] While they looked at a number of factors and identified several possible problems, one of the biggest was the difficulty of obtaining the information needed for the evaluation. Many of the projects they studied did not provide a number of the basic pieces of information required for the evaluation or, perhaps worse, required extensive examination of the project web site and documentation to retrieve that information. From their paper, it sounds as if this issue was more of a communication breakdown than an attempt to hide any of the information, though that did not make the information any more accessible.

In addition to the low engagement rates for the various published evaluation methods, another concern is the viability of the sponsoring organizations. One of the assessment papers indicated that the published methods with the smallest footprint, or the easiest to use, appeared to be FAME and OpenBRR. I have already mentioned my difficulty obtaining additional information regarding FAME, and OpenBRR appears to be even more problematic. BRR was first registered on SourceForge in September 2005[19], and an extensive Request for Comments from the founding members of the BRR consortium (SpikeSource, the Center for Open Source Investigation at Carnegie Mellon West, and Intel Corporation) was released.[3] In 2006, in contrast to typical open-source development groups, the OpenBRR group announced the formation of an OpenBRR Corporate Community group. Peter Galli's story indicates that "the current plan is that membership will not be open to all."[20] He quotes Murugan Pal as saying "membership will be on an invitation-only basis to ensure that only trusted participants are coming into the system." However, for some reason, at least some in the group "expressed concern and unhappiness about the idea of the information discussed not being shared with the broader open-source community."[20]

While the original Business Readiness Rating web site still exists, it is currently little more than a static web page.[21] Some of the original information posted on the site appears to still be there; you just have to know its URL to access it, as the original links on the site have been removed. Otherwise, you may have to turn to the Internet Archive to retrieve some of the documentation. The lack of any visible activity regarding OpenBRR prompted a blog post from one graduate student in 2012 asking "What happened to OpenBRR (Business Readiness Rating for Open Source)?"[22]

It appears that at some point any development activity regarding OpenBRR morphed into OSSpal.[23] However, background information on this project is sparse as well. While the site briefly mentions that OSSpal incorporates a number of lessons learned from BRR, there is very little additional information regarding the group or the method's procedures. Their "All Projects" tab provides a list of over 30 open-source projects, but the majority simply show "No votes yet" under the various headings. In fact, as of now, the only projects showing any input at all are Ubuntu and Mozilla Firefox.

Initial evaluation and selection recommendations

In-depth evaluation

Completing the evaluation

Summary

Glossary

References

  1. Stol, Klaas-Jan; Ali Babar, Muhammad (2010). "A Comparison Framework for Open Source Software Evaluation Methods". In Ågerfalk, P.J.; Boldyreff, C.; González-Barahona, J.M.; Madey, G.R.; Noll, J. Open Source Software: New Horizons. Springer. pp. 389–394. doi:10.1007/978-3-642-13244-5_36. ISBN 9783642132445.
  2. Fritz, Catherine A.; Carter, Bradley D. (23 August 1994). A Classification And Summary Of Software Evaluation And Selection Methodologies. Mississippi State, MS: Department of Computer Science, Mississippi State University. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.4470. 
  3. "OpenBRR, Business Readiness Rating for Open Source: A Proposed Open Standard to Facilitate Assessment and Adoption of Open Source Software" (PDF). OpenBRR. 2005. http://docencia.etsit.urjc.es/moodle/file.php/125/OpenBRR_Whitepaper.pdf. Retrieved 13 April 2015.
  4. Wasserman, A.I.; Pal, M.; Chan, C. (10 June 2006). "The Business Readiness Rating: a Framework for Evaluating Open Source" (PDF). Proceedings of the Workshop on Evaluation Frameworks for Open Source Software (EFOSS) at the Second International Conference on Open Source Systems. Lake Como, Italy. pp. 1–5. Archived from the original on 11 January 2007. http://web.archive.org/web/20070111113722/http://www.openbrr.org/comoworkshop/papers/WassermanPalChan_EFOSS06.pdf. Retrieved 15 April 2015. 
  5. Majchrowski, Annick; Deprez, Jean-Christophe (2008). "An Operational Approach for Selecting Open Source Components in a Software Development Project". In O'Connor, R.; Baddoo, N.; Smolander, K.; Messnarz, R.. Software Process Improvement. Springer. pp. 176–188. doi:10.1007/978-3-540-85936-9_16. ISBN 9783540859369. 
  6. Petrinja, E.; Nambakam, R.; Sillitti, A. (2009). "Introducing the Open Source Maturity Model". ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development, 2009. IEEE. pp. 37–41. doi:10.1109/FLOSS.2009.5071358. ISBN 9781424437207. 
  7. Deprez, Jean-Christophe; Alexandre, Simon (2008). "Comparing Assessment Methodologies for Free/Open Source Software: OpenBRR and QSOS". In Jedlitschka, Andreas; Salo, Outi. Product-Focused Software Process Improvement. Springer. pp. 189–203. doi:10.1007/978-3-540-69566-0_17. ISBN 9783540695660.
  8. Wasserman, Anthony I.; Pal, Murugan (2010). "Evaluating Open Source Software" (PDF). Carnegie Mellon University - Silicon Valley. Archived from the original on 18 February 2015. https://web.archive.org/web/20150218173146/http://oss.sv.cmu.edu/readings/EvaluatingOSS_Wasserman.pdf. Retrieved 31 May 2015. 
  9. Jadhav, Anil S.; Sonar, Rajendra M. (March 2009). "Evaluating and selecting software packages: A review". Information and Software Technology 51 (3): 555–563. doi:10.1016/j.infsof.2008.09.003. 
  10. Pani, F.E.; Sanna, D. (11 June 2010). "FAME, A Methodology for Assessing Software Maturity". Atti della IV Conferenza Italiana sul Software Libero. Cagliari, Italy. 
  11. Pani, F.E.; Concas, G.; Sanna, D.; Carrogu, L. (2010). "The FAME Approach: An Assessing Methodology" (PDF). In Niola, V.; Quartieri, J.; Neri, F.; Caballero, A.A.; Rivas-Echeverria, F.; Mastorakis, N.. Proceedings of the 9th WSEAS International Conference on Telecommunications and Informatics. Stevens Point, WI: WSEAS. ISBN 9789549260021. http://www.wseas.us/e-library/conferences/2010/Catania/TELE-INFO/TELE-INFO-10.pdf.
  12. Pani, F.E.; Concas, G.; Sanna, S.; Carrogu, L. (August 2010). "The FAMEtool: an automated supporting tool for assessing methodology" (PDF). WSEAS Transactions on Information Science and Applications 7 (8): 1078–1089. http://www.wseas.us/e-library/transactions/information/2010/88-137.pdf.
  13. Pani, F.E.; Sanna, D.; Marchesi, M.; Concas, G. (2010). "Transferring FAME, a Methodology for Assessing Open Source Solutions, from University to SMEs". In D'Atri, A.; De Marco, M.; Braccini, A.M.; Cabiddu, F.. Management of the Interconnected World. Springer. pp. 495–502. doi:10.1007/978-3-7908-2404-9_57. ISBN 9783790824049.
  14. Soto, M.; Ciolkowski, M. (2009). "The QualOSS open source assessment model measuring the performance of open source communities". 3rd International Symposium on Empirical Software Engineering and Measurement, 2009. IEEE. pp. 498–501. doi:10.1109/ESEM.2009.5314237. ISBN 9781424448425.
  15. Soto, M.; Ciolkowski, M. (2009). "The QualOSS Process Evaluation: Initial Experiences with Assessing Open Source Processes". In O'Connor, R.; Baddoo, N.; Cuadrado-Gallego, J.J.; Rejas Muslera, R.; Smolander, K.; Messnarz, R.. Software Process Improvement. Springer. pp. 105–116. doi:10.1007/978-3-642-04133-4_9. ISBN 9783642041334. 
  16. Haaland, Kirsten; Groven, Arne-Kristian; Glott, Ruediger; Tannenberg, Anna (1 July 2010). "Free/Libre Open Source Quality Models - a comparison between two approaches" (PDF). 4th FLOSS International Workshop on Free/Libre Open Source Software. Jena, Germany. pp. 1–17. http://publications.nr.no/directdownload/publications.nr.no/5444/Haaland_-_Free_Libre_Open_Source_Quality_Models-_a_compariso.pdf. Retrieved 15 April 2015. 
  17. Hauge, O.; Osterlie, T.; Sorensen, C.-F.; Gerea, M. (2009). "An empirical study on selection of Open Source Software - Preliminary results". ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development, 2009. IEEE. pp. 42–47. doi:10.1109/FLOSS.2009.5071359. ISBN 9781424437207.
  18. Ayala, Claudia; Cruzes, Daniela S.; Franch, Xavier; Conradi, Reidar (2011). "Towards Improving OSS Products Selection – Matching Selectors and OSS Communities Perspectives". In Hissam, S.; Russo, B.; de Mendonça Neto, M.G.; Kon, F.. Open Source Systems: Grounding Research. Springer. pp. 244–258. doi:10.1007/978-3-642-24418-6_17. ISBN 9783642244186. 
  19. Chan, C.; enugroho; Wasserman, T. (17 April 2013). "Business Readiness Rating (BRR)". SourceForge. https://sourceforge.net/projects/openbrr/. Retrieved 21 April 2015. 
  20. Galli, Peter (24 April 2006). "OpenBRR Launches Closed Open-Source Group". eWeek. QuinStreet, Inc. http://www.eweek.com/c/a/Linux-and-Open-Source/OpenBRR-Launches-Closed-OpenSource-Group. Retrieved 13 April 2015.
  21. "Welcome to Business Readiness Rating: A FrameWork for Evaluating OpenSource Software". OpenBRR. Archived from the original on 24 December 2014. https://web.archive.org/web/20141224233009/http://www.openbrr.org/. Retrieved 14 April 2015. 
  22. Arjona, Laura (6 January 2012). "What happened to OpenBRR (Business Readiness Rating for Open Source)?". The Bright Side. https://larjona.wordpress.com/2012/01/06/what-happened-to-openbrr-business-readiness-rating-for-open-source/. Retrieved 13 April 2015. 
  23. "Welcome to OSSpal". OSSpal. http://osspal.org/. Retrieved 18 April 2015. 

Notes

This article has not officially been published in a journal. However, this presentation is largely faithful to the original paper. The content has been edited for grammar, punctuation, and spelling. Additional error correction of a few reference URLs and types as well as cleaning up of the glossary also occurred. Redundancies and references to entities that don't offer open-source software were removed from the FLOSS examples in Table 2. DOIs and other identifiers have been added to the references to make them more useful. This article is being made available for the first time under the Creative Commons Attribution-ShareAlike 4.0 International license, the same license used on this wiki.