Journal:Generalized procedure for screening free software and open-source software applications/Print version
- Full article title: Generalized Procedure for Screening Free Software and Open Source Software Applications
- Author affiliation(s): Arcana Informatica; Scientific Computing
- Primary contact: Email: email@example.com
- Distribution license: Creative Commons Attribution-ShareAlike 4.0 International
Free software and open-source software projects have become a popular alternative to proprietary tools in both scientific research and other fields. However, selecting the optimal application for use in a project can be a major task in itself, as the list of potential applications must first be identified and screened to determine promising candidates before an in-depth analysis of systems can be performed. To simplify this process, we have initiated a project to generate a library of in-depth reviews of free software and open-source software applications. Before beginning this project, we reviewed the evaluation methods available in the literature. As we found no single method that stood out, we synthesized a general procedure, drawing on a variety of available sources, for screening a designated class of applications to determine which ones to evaluate in more depth. In this paper, we examine a number of currently published processes to identify their strengths and weaknesses. Selecting from these processes, we synthesize a proposed screening procedure to triage available systems and identify those most promising to pursue. To illustrate the functionality of this technique, we execute the screening procedure against a selected class of applications.
There is much confusion regarding free software and open-source software, and many people use these terms interchangeably. However, the connotations associated with the terms are highly significant. So perhaps we should start with an examination of the terms to clarify what we are attempting to screen. While there are many groups and organizations involved with open-source software, two of the main ones are the Free Software Foundation (FSF) and the Open Source Initiative (OSI).
When discussing free software, we are not explicitly discussing software for which no fee is charged; rather, we are referring to "free" in terms of liberty. To quote the Free Software Foundation (FSF):
A program is free software if the program's users have the four essential freedoms:
- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbor (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
This does not mean that a program is provided at no cost, or gratis, though some of these rights imply that it would be. In the FSF's analysis, any application that does not conform to these freedoms is unethical. While there is also "free software" or "freeware" that is given away at no charge but without the source code, this would not be considered free software under the FSF definition.
The Open Source Initiative (OSI), originally formed to promote free software, refers to it as open-source software (OSS) to make it sound more business friendly. The OSI defines open-source software as any application that meets the following 10 criteria, which they based on the Debian Free Software Guidelines:
- Free redistribution
- Source code included
- Must allow derived works
- Must preserve the integrity of the author's source code
- License must not discriminate against persons or groups
- License must not discriminate against fields of endeavor
- Distribution of license
- License must not be specific to a product
- License must not restrict other software
- License must be technology-neutral
Open-source software adherents take what they consider the more pragmatic view, focusing on the license requirements, and put significant effort into convincing commercial enterprises of the practical benefits of open source, meaning the free availability of application source code.
In an attempt to placate both groups when discussing the same software application, the term free/open-source software (F/OSS) was developed. Since the term "free" still tended to confuse some people, the term "libre," which connotes freedom, was added, resulting in the term free/libre open-source software (FLOSS). If you perform a detailed analysis of the full specifications, you will find that all free software fits the open-source software definition, while not all open-source software fits the free software definition. However, any open-source software that is not also free software is the exception rather than the rule. As a result, you will find these acronyms used almost interchangeably, but there are subtle differences in meaning, so stay alert. In the final analysis, the software license that accompanies the software is what you legally have to follow.
The reality is that since both groups trace their history back to the same origins, the practical differences between an application being free software or open-source software are generally negligible. Keep in mind that the above descriptions are to some degree generalizations, as both organizations are involved in multiple activities. There are many additional groups interested in open source for a wide variety of reasons. However, this diversity is also a strong point, resulting in a vibrant and dynamic community. You should not allow the difference in terminology to be divisive; the fact that all of these terms can be traced back to the same origin should unite us. In practice, many of the organizations' members will use the terms interchangeably, depending on the point that they are trying to get across.

With in excess of 300,000 FLOSS applications currently registered on SourceForge.net and over 10 million repositories on GitHub, there are generally multiple options available for any class of application, be it a laboratory information management system (LIMS), an office suite, a database, or a document management system. Presumably you have gone through the assessment of the various challenges to using an open-source application and have decided to move ahead with selecting an appropriate one. The difficulty now becomes selecting which application to use. While there are multiple indexes of FOSS projects, these are normally just listings of the applications with a brief description provided by the developers, with no indication of the vitality of the project and no independent evaluation.
What is missing is a catalog of in-depth reviews of these applications, eliminating the need for each group to go through the process of developing a list of potential applications, screening all available applications, and performing in-depth reviews of the most promising candidates. While it is true that once an organization has made a tentative selection it will need to perform its own testing to confirm that the selected application meets its specific needs, there is no reason for everyone to go through the tedious process of identifying projects and weeding out the untenable ones.
The primary goal of this document is to describe a general procedure capable of being used to screen any selected class of software applications. The immediate concern is with screening FLOSS applications, though the process can be adapted to allow at least a rough cross-comparison of FOSS and commercial applications. To that end, we start with an examination of published survey procedures. We then combine a subset of standard software evaluation procedures with recommendations for evaluating FLOSS applications. Because it is designed to screen such a diverse range of applications, the procedure is by necessity very general. However, as we move through the steps of the procedure, we will describe how to tune the process for the class of software that you are interested in.
You can also ignore any arguments that frame the selection as a choice between FLOSS and "commercial" applications. In this context, "commercial" refers to the marketing approach, not to the quality of the software. Many FLOSS applications have comparable, if not superior, quality to products that are traditionally marketed and licensed. Wheeler discusses this issue in more detail, showing that by many definitions FLOSS is commercial software.
The final objective of this process is to document a procedure that can then be applied to any class of FOSS applications to determine which projects in the class are the most promising to pursue, allowing us to expend our limited resources most effectively. As the information available for evaluating FOSS projects is generally quite different from that available for commercially licensed applications, this evaluation procedure has been optimized to best take advantage of this additional information.
A search of the literature returns thousands of papers related to open-source software, but most are of limited value with regard to the scope of this project. The need for a process to assist in selecting between open-source projects is mentioned in a number of these papers, and there appear to be over a score of different published procedures. Regrettably, none of these methodologies appear to have gained large-scale support in the industry.
Stol and Babar have published a framework for comparing evaluation methods targeting open-source software and include a comparison of 20 of them. They noted that web sites that simply consisted of a suggestion list for selecting an open-source application were not included in this comparison. This selection difficulty is nothing new, nor unique to FLOSS applications. In their 1994 paper, Fritz and Carter review over a dozen existing selection methodologies, covering their strengths, weaknesses, and the mathematics used, as well as other factors involved.
Table 1. Comparison frameworks and methodologies for the examination of FLOSS applications, extracted from Stol and Babar. The selection procedure is described in Stol and Babar's paper. 'Year' indicates the date of publication; 'Orig.' indicates whether the described process originated in industry (I) or research (R); 'Method' indicates whether the paper describes a formal analysis method and procedure (Yes) or just a list of evaluation criteria (No).
Extensive comparisons between some of these methods have also been published, such as Deprez and Alexandre's comparative assessment of the OpenBRR and QSOS techniques. Wasserman and Pal have also published a paper under the title of "Evaluating Open Source Software," which appears to be more of an updated announcement and in-depth description of the Business Readiness Rating (BRR) framework. Jadhav and Sonar have also examined the issue of both evaluating and selecting software packages. They include a helpful analysis of the strengths and weaknesses of the various techniques. Perhaps more importantly, they clearly point out that there is no common list of evaluation criteria. While the majority of the articles they reviewed listed the criteria used, Jadhav and Sonar indicated that these criteria frequently did not include a detailed definition, which required each evaluator to use their own, sometimes conflicting, interpretation.
Since the publication of Stol and Babar's paper, additional evaluation methods have been published. Of particular interest are a series of papers by Pani et al. describing their proposed FAME (Filter, Analyze, Measure and Evaluate) methodology. In their "Transferring FAME" paper, they emphasized that the evaluation frameworks previously described in the published literature were frequently difficult to apply in real environments, as they were developed using an analytic research approach incorporating a multitude of factors.
Their stated design objective with FAME is to reduce the complexity of performing the application evaluation, particularly for small organizations. They specify "[t]he goals of FAME methodology are to aid the choice of high-quality F/OSS products, with high probability to be sustainable in the long term, and to be as simple and user friendly as possible." They further state that "[t]he main idea behind FAME is that the users should evaluate which solution amongst those available is more suitable to their needs by comparing technical and economical factors, and also taking into account the total cost of individual solutions and cash outflows. It is necessary to consider the investment in its totality and not in separate parts that are independent of one another."
This paper breaks the FAME methodology into four activities:
- Identify the constraints and risks of the projects
- Identify and rank user requirements
- Identify and rank all key objectives of the project
- Generate a priority framework to allow comparison of needs and features
Their paper includes a formula for generating a score from the information collected; the evaluated system with the highest "major score," Pjtot, is the one selected. While it is common practice to define an analysis process that condenses all of the information gathered into a single score, I highly caution against blindly accepting such a score. FAME, as well as a number of the other assessment methodologies, is designed for iterative use. The logical purpose of this is to allow the addition of factors initially overlooked into your assessment, as well as to change the weighting of existing factors as you reevaluate their importance. However, this feature also means that it is very easy to unconsciously, or consciously, skew the results of the evaluation to select any system you wish. Condensing everything down into a single value also strips out much of the information that you have worked so hard to gather; note that you can generate the same result score from significantly different input values. While of value, selecting a system based on just the highest score could potentially leave you with a totally unworkable system.
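The hazard of condensing an evaluation into one number is easy to demonstrate. The sketch below uses a generic weighted sum in the spirit of single-score methods such as FAME's Pjtot; the criteria, weights, and ratings are invented for illustration and are not taken from FAME itself:

```python
def weighted_score(ratings, weights):
    """Condense per-criterion ratings (1-5) into a single weighted score.

    Generic weighted-sum scoring; the criteria and weights are
    illustrative assumptions, not those of any published method.
    """
    return sum(ratings[c] * weights[c] for c in weights)

# Hypothetical weights for three screening criteria.
weights = {"functionality": 4, "community": 3, "documentation": 3}

# Strong features, but a weak community and poor documentation...
system_a = {"functionality": 5, "community": 1, "documentation": 1}
# ...versus mediocre features with reasonable support.
system_b = {"functionality": 2, "community": 3, "documentation": 3}

print(weighted_score(system_a, weights))  # 26
print(weighted_score(system_b, weights))  # 26 -- same score, very different systems
```

The identical totals hide the fact that one system may be unusable for your purposes, which is exactly why the per-criterion data should be kept alongside any summary score rather than discarded.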
Pani et al. also describe a FAMEtool to assist in this data gathering and evaluation. However, a general web search as well as a review of their FAME papers revealed no indication of how to obtain this resource. While this paper includes additional comparisons with other FLOSS analysis methodologies, and there are some hints suggesting that the FAMEtool is being provided as a web service, I have found no URL specified for it. As of now, I have received no responses from the research team via either e-mail or Skype regarding FAME, the FAMEtool, or feedback on its use.
During this same time frame, Soto and Ciolkowski also published papers describing the QualOSS Open Source Assessment Model and compared it to a number of the procedures in Stol and Babar's table. Their focus was primarily on three process perspectives: product quality, process maturity, and sustainability of the development community. Because the following OSS project assessment models offered little more than a rudimentary examination of the process perspective, they judged them unsatisfactory: QSOS, CapGemini OSMM, Navica OSMM, and OpenBRR. They position QualOSS as an extension of the traditional CMMI and SPICE process maturity models. While there are multiple items in the second paper that are worth incorporating into an in-depth evaluation process, they do not seem suitable for what is intended as a quick survey.
Another paper, published by Haaland and Groven, also compared a number of open-source quality models. To this paper's credit, the authors devoted a significant amount of space to discussing the different definitions of quality and how the target audience of a tool might affect which definition was used. Like Stol and Babar, they listed a number of the quality assessment models to choose from, including OSMM, QSOS, OpenBRR, and others. For their comparison, they selected OpenBRR and QualOSS, classifying OpenBRR as a first-generation tool with a "[u]ser view on quality" and QualOSS as a second-generation tool with a "business point of view." An additional distinction is that OpenBRR is primarily a manual tool, while QualOSS is primarily an automated one. Their analysis clearly demonstrates the steps involved in using these tools and highlights where they are objective and where they are subjective. While they were unable to answer their original question as to whether the first- or second-generation tools did a better job of evaluation, to me they answered an even more important, unasked question: as they proceeded through their evaluation, it became apparent how much the questions defined in the methods could affect the results. Even though the authors might have considered the questions objective, I could readily see how some of them could be interpreted in alternate ways. My takeaway is an awareness of the potential danger of rigid tools, as they can skew the accuracy of the evaluation results depending on exactly what you want the evaluated application to do and how you plan to use it. These models can be very useful guides, but they should not replace a carefully considered evaluation, as there will always be factors influencing the selection decision that did not occur to anyone when the specifications were being written.
Hauge et al. have noted that despite the development of several normative methods of assessment, empirical studies have failed to show widespread adoption of these methods. From their survey of a number of Norwegian software companies, they noticed a tendency for selectors to skip the in-depth search for what they call the "best fit" application and fall back on a "first fit." This is an iterative procedure, with the knowledge gained from the failure of one set of component tests being incorporated into the evaluation of the next one. Their recommendation is for researchers to stop attempting to develop general evaluation schemas or normative selection methods applicable to any software application and instead focus on identifying situationally sensitive factors that can be used as evaluation criteria. This is a very rational approach, as every situation is going to be different, even when evaluating the same set of applications, because each user's needs are different.
Ayala et al. have performed a study to try to determine more accurately why more people don't take advantage of the various published selection methodologies. While they looked at a number of factors and identified several possible problems, one of the biggest was the difficulty of obtaining the information needed for the evaluation. Many of the projects they studied did not provide a number of the basic pieces of information required for the evaluation or, perhaps worse, required extensive examination of the project web site and documentation to retrieve it. From their paper, it sounds as if this issue was more a communication breakdown than an attempt to hide any of the information, not that this made the information any more accessible.
In addition to the low engagement rates for the various published evaluation methods, another concern is the viability of the sponsoring organizations. One of the assessment papers indicated that the published methods with the smallest footprint, i.e., the easiest to use, appeared to be FAME and OpenBRR. I have already mentioned my difficulty obtaining additional information regarding FAME, and OpenBRR appears to be even more problematic. BRR was first registered on SourceForge in September of 2005, and an extensive Request for Comments from the founding members of the BRR consortium (SpikeSource, the Center for Open Source Investigation at Carnegie Mellon West, and Intel Corporation) was released. In 2006, in contrast to typical open-source development groups, the OpenBRR group announced the formation of an OpenBRR Corporate Community group. Peter Galli's story indicates that "the current plan is that membership will not be open to all." He quotes Murugan Pal as saying "membership will be on an invitation-only basis to ensure that only trusted participants are coming into the system." However, at least some in the group "expressed concern and unhappiness about the idea of the information discussed not being shared with the broader open-source community."
While the original Business Readiness Rating web site still exists, it is currently little more than a static web page. Some of the original information posted on the site is still there; you just have to know its URL to access it, as the original links on the site have been removed. Otherwise, you may have to turn to the Internet Archive to retrieve some of the documentation. The lack of any visible activity regarding OpenBRR prompted a blog post from one graduate student in 2012 asking "What happened to OpenBRR (Business Readiness Rating for Open Source)?"
It appears that at some point development activity regarding OpenBRR morphed into OSSpal. However, background information on this project is sparse as well. While the site briefly mentions that OSSpal incorporates a number of lessons learned from BRR, there is very little additional information regarding the group or the method's procedures. Their "All Projects" tab provides a list of over 30 open-source projects, but the majority simply show "No votes yet" under the various headings. In fact, as of now, the only projects showing any input at all are Ubuntu and Mozilla Firefox.
Initial evaluation and selection recommendations
At this point, we'll take a step back from the evaluation methodologies papers and examine some of the more general recommendations regarding evaluating and selecting FLOSS applications. The consistency of their recommendations may provide a more useful guide for an initial survey of FLOSS applications.
In TechRepublic, de Silva recommends 10 questions to ask when selecting a FLOSS application. While he provides a brief discourse on each question in his article to ensure you understand its point, I've collected the 10 questions into the following list. Once we see what overlap, if any, exists among our general recommendations, we'll address some of the consolidated questions in more detail.
- Are the open source license terms compatible with my business requirements?
- What is the strength of the community?
- How well is the product adopted by users?
- Can I get a warranty or commercial support if I need it?
- What quality assurance processes exist?
- How good is the documentation?
- How easily can the system be customized to my exact requirements?
- How is this project governed and how easily can I influence the road map?
- Will the product scale to my enterprise's requirements?
- Are there regular security patches?
Similarly, in InfoWorld, Phipps lists seven questions you should have answered before even starting to select a software package. His list of questions, pulled directly from his article, is:
- Am I granted copyright permission?
- Am I free to use my chosen business model?
- Am I unlikely to suffer patent attack?
- Am I free to compete with other community members?
- Am I free to contribute my improvements?
- Am I treated as a development peer?
- Am I inclusive of all people and skills?
This list of questions shows a moderately different point of view, as it is not just about selecting an open-source system but also about getting involved in its direct development. Padin, of 8th Light, Inc., takes the viewpoint of a developer who might incorporate open-source software into their projects. His list of criteria, pulled directly from his blog, includes:
- Does it do what I need it to do?
- How much more do I need it to do?
- Easy to review source code
- Tests and specs
Metcalfe of OSS Watch lists his top tips as:
- Ongoing effort
- Standards and interoperability
- Support (Community)
- Support (Commercial)
- Version 1.0
- Skill setting
- Project development model
In his LIMSexpert blog, Joel Limardo of ForwardPhase Technologies, LLC lists the following as components to check when evaluating an open-source application:
- Check licensing
- Check code quality
- Test setup time
- Verify extensibility
- Check for separation of concerns
- Check for last updated date
- Check for dependence on outdated toolkits/frameworks
Perhaps the most referenced of the general articles on selecting FLOSS applications is David Wheeler's "How to Evaluate Open Source Software / Free Software (OSS/FS) Programs." The detailed functionality to consider will vary with the types of applications being compared, but there are a number of general features that are relevant to almost any type of application. While we will cover them in more detail later, Wheeler categorizes the features to consider as the following:
- System functionality
- System cost – direct and indirect
- Popularity of application, i.e., its market share for that type of application
- Varieties of product support available
- Maintenance of application, i.e., is development still taking place
- Reliability of application
- Performance of application
- Scalability of application
- Usability of application
- Security of application
- Adaptability/customizability of application
- Interoperability of application
- Licensing and other legal issues
While a hurried glance might suggest a lot of diversity in the features these various resources recommend, a closer look at what they are saying reveals a recurring series of concerns. The most significant differences between the suggested feature lists are actually due to how wide a breadth of the analysis process each author is considering, and to the underlying features each author is concerned with.
With a few additions, the high-level screening template described in the rest of this communication is based on Wheeler's previously mentioned document describing his recommended process for evaluating open-source software and free software programs. Structuring the items thus will make it easier to locate the corresponding sections in his document, which includes many useful specific recommendations as well as a great deal of background information to help you understand the why of the topic. I highly recommend reading it and following up on some of the links he provides. I will also include evaluation suggestions from several of the previously mentioned procedures where appropriate.
Wheeler defines four basic steps to this evaluation process, as listed below:
- Identify candidate applications.
- Read existing product reviews.
- Compare attributes of these applications to your needs.
- Analyze the applications best matching your needs in more depth.
Wheeler categorizes this process with the acronym IRCA. In this paper, we will focus on the IRC components of this process. To confirm the efficacy of this protocol, we will later apply it to several classes of open-source applications and examine its output.
Realistically, before you can survey applications to determine which ones best match your needs, you must determine what your needs actually are. The product of determining these needs is frequently referred to as the user requirements specification (URS). This document can be generated in several ways, including having all of the potential users submit a list of the functions and capabilities that they feel are important. While the requirements document can be created by a single person, it is generally best to make it a group effort, with multiple reviews of the draft document involving all of the users who will be working with the application. The reason for this is to ensure that no important requirement is missed. When a requirement is missed, it is frequently because the requirement is so basic that it never occurs to anyone that it specifically needs to be included in the requirements document. Admittedly, a detailed URS is not required at the survey level, but it is worth having if only to identify, by their implications, other features that might be significant.
Needs will, of course, vary with the type of application you are looking for and what you are planning to do with it. Keep in mind that the URS is a living document, subject to change through this whole process. Developing a URS is generally an iterative process, since as you explore systems, you may well see features that you hadn't considered that you find desirable. This process will also be impacted by whether the application to be selected will be used in a regulated environment. If it is, there will be existing documents that describe the minimum functionality that must be present in the system. Even if it is not to be used in a regulated environment, documents exist for many types of systems that describe the recommended functional requirements that would be expected for that type of system.
For a clarifying example, if you were attempting to select a laboratory information management system (LIMS), you can download checklists and standards of typical system requirements from a variety of sources. These will provide you with examples of the questions to ask, but you will have to determine which ones are important to, or required for, your particular effort.
Depending on the use to which this application is to be applied, you may be subject to other specific regulatory requirements as well. Which regulations apply may vary, since the same types of analysis performed in different industries fall under different regulatory organizations. This aspect is further complicated by the fact that you may be affected by more than one country's regulations if your analysis applies to products being shipped to other countries. While some specific regulations may have changed since its publication, an excellent resource for orienting you to the diverse factors that must be considered is Siri Segalstad's book International IT Regulations and Compliance. My understanding is that an updated version of this book is currently in preparation. Keep in mind that while the regulatory requirements you must meet will vary, these regulations by and large also describe best practices, or at least the minimal allowed practices. These requirements are not put in place arbitrarily (generally) or to make things difficult for you, but to ensure the quality of the data produced. As such, any deviations should be carefully considered, whether you are working in a regulated environment or not. Proper due diligence would be to determine which regulations and standards apply to your operation.
For a LIMS, an example of following best practices is ensuring that the application has a full and detailed audit trail. An audit trail allows you to follow the processing of items through your system, determining who did what and when. In any field where it might become important to identify the actions taken during a processing step, an audit trail should be mandatory. While your organization's operations may not fall under the FDA's 21 CFR Part 11 regulations, which address data access and security (including audit trails), it is still extremely prudent to select an application that complies with them. If it does not, then almost anyone could walk up to your system and modify data, either deliberately or accidentally, and you would have no idea who made the changes or what changes they made. For that matter, you might not even be able to tell that a change was made at all, which will likely raise concerns both inside and outside of your organization. This would obviously cause major problems if it became a hinge issue in any type of liability lawsuit.
For this screening procedure, you do not have to have a fully detailed URS, but it is expedient to have a list of your make-or-break issues. This list will be used later for comparing systems and determining which ones justify a more in-depth evaluation.
To evaluate potential applications against your functional criteria, you must initially generate a list of potential systems. While this might sound easy, generating a comprehensive list frequently proves to be a challenge. When initiating the process, you must first determine the type of system that you are looking for, be it a LIMS, a hospital management system, a database, etc. At this point, you should be fairly open in building your list of candidates. By that, I mean that you should be careful not to select applications based solely on the utilization label applied to them. The same piece of software can frequently be applied to solve multiple problems, so you should cast a wide net and not automatically reject a system because the label you were looking for hadn't been applied to it. While the label may give you a convenient place to start searching, it is much more important to look for the functionality that you need, not what the system is called. In any case, many times the applied labels are vague and mean very different things to different people.
There are a variety of ways to generate your candidate list. A good place to start is simply talking with colleagues in your field. Have they heard of or used a FLOSS application of the appropriate type that they like? Another way is to flip through journals and trade magazines that cover your field. Any sufficiently promising applications are likely to be mentioned there. Many of the trade magazines will have a special annual issue that covers equipment and software applicable to their field. It is difficult to generate a list of all potential resources, as many of these trade publications are little-known outside of their field. Also keep in mind that with the continued evolution of the World Wide Web, many of these trade publications also have associated web sites that you can scan or search. The table below includes just a small fraction of the sites that are available. (We would welcome the suggestion of any additional resource sites that you are aware of. Please e-mail the fields covered, the resource name, and either its general URL or the URL of the specific resource section to the corresponding editor.)
Table 2. Examples of focused FLOSS resource sites available on the web
I also recommend checking some of the general open source project lists, such as the ones generated by Cynthia Harvey at Datamation, which has been covering the computer and data-processing industry since 1957. In particular, you might find their article "Open Source Software List: 2015 Ultimate List" useful. It itemizes over 1,200 open source applications, including some in categories that I didn't even know existed.
It would also be prudent to search the major open source repositories such as SourceForge and GitHub. Wikipedia includes a comparison of source code hosting facilities that would be worth reviewing as well. Keep in mind that you will need to be flexible with your search terms, as the developers might be looking at the application differently than you are. While they were created for a different purpose, an examination of the books in The Architecture of Open Source Applications might prove useful as well. Other sites where you might find interesting information regarding new open-source applications are the various open source award sites, such as the InfoWorld Best of Open Source Software Awards, colloquially known as the Bossies.
When searching the web, don't rely on just Google or Bing. Don't forget to check out all of the journal web sites such as SpringerLink, Wiley, ScienceDirect, PubMed, and others, as they contain a surprising amount of information on FLOSS. If you don't wish to search each of them individually, there are other search engines out there which can give you an alternate view of the research resources available. To name just two, be sure to try both Google Scholar and Microsoft Academic Search. These tools can also be used to search for master's theses and doctoral dissertations, which likewise contain a significant amount of information regarding open source.
While working on creating your candidate list, be sure to pull any application reviews that you come across. If done well, these reviews can save you a significant amount of time in screening a potential system. However, unless you're familiar with the credentials of the author, be cautious of relying on them for all of your information. While not common, people have been known to post fake reviews online, sometimes when it is not even April 1! Another great resource, both for identifying projects and obtaining information about them, is Open Hub, a web site dedicated to open source and maintained by Black Duck Software, Inc. Open Hub allows you to search by project, person, organization, forums, and code. For example, if I searched for Bika LIMS, it would currently return the results for Bika Open Source LIMS 3 along with some basic information regarding the system. If I were to click on the project's name, a much more detailed page regarding this project is displayed. Moving your mouse cursor over the graphs displays the corresponding information for that date.
Once a list of candidate applications has been generated, the list of entries must be compared. Some of this comparison can be performed objectively, but it also requires subjective analysis of some components. As Stol and Babar have shown, there is no single recognized procedure for either the survey or detailed comparison of FLOSS applications that has shown a marked popularity above the others.
The importance of any one specific aspect of the evaluation will vary with the needs of the organization. General system functionality will be an important consideration, but specific aspects of the contained functionality will have different values to different groups. For instance, interoperability may be very important to some groups, while others may use the application as their only data system, with no interest in exchanging data files, so interoperability is not a concern for them. While you can develop a weighting system for different aspects of the system, this can easily skew selections, resulting in a system that has a very good rating yet is unable to perform the required function. Keep in mind that though this is a high-level survey, we are asking broad critical questions, not attempting to compare detailed minutiae. Also keep in mind that a particular requirement might potentially fall under multiple headings. For example, compliance with 21 CFR Part 11 regulations might be included under functionality or security.
While in-depth analysis of the screened systems will require a more detailed examination and comparison, for the purpose of this initial survey a much simpler assessment protocol will serve. While there is no single "correct" evaluation protocol, something in the nature of the three-leaf scoring criteria described for QSOS should be suitable. Keep in mind that for this quick assessment we are using broad criteria, so both the criteria and the scoring will be more ambiguous than those required for an in-depth assessment. Do not be afraid to split any of these criteria into multiple finer classes of criteria if the survey requires it. This need would most likely appear under "system functionality," as that is where most people's requirements greatly diverge.
In this process, we will assign one of three numeric values to each of the listed criteria. A score of zero indicates that the system does not meet the specified criteria. A score of one indicates that the system marginally meets the specified criteria. You can look on this as the feature is present and workable, but not to the degree you'd like it. Finally, a score of two indicates that the system fully meets or exceeds the specified requirement. In the sections below I will list some possible criteria for this table. However, you can adjust these descriptions or add a weighting factor, as many other protocols do, to adjust for the criticality of a given requirement.
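As a sketch of how such a tally might be computed, assuming invented criteria names and an optional weighting factor of the kind mentioned above:

```python
# Sketch of the three-level (0/1/2) scoring described above. The
# criteria names and weights are invented for illustration only.
def score_system(scores, weights=None):
    """scores maps criterion -> 0, 1, or 2; unlisted weights default to 1."""
    weights = weights or {}
    return sum(value * weights.get(criterion, 1)
               for criterion, value in scores.items())

candidate = {"functionality": 2, "community": 1, "cost": 2, "support": 0}
total = score_system(candidate)                           # unweighted: 5
weighted = score_system(candidate, {"functionality": 3})  # functionality triple-weighted: 9
```

Whether to weight at all is a judgment call; as noted earlier, weighting can skew a comparison if a high total hides a failed critical requirement.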
Realistically, for some of your potential evaluation criteria, you can compensate for a missing factor in some other way. For other criteria, their presence or absence can be a drop-dead issue. That is, if a particular criterion or feature isn't present, then it doesn't matter how well any of the other criteria are ranked: that particular system is out of consideration. Deciding which, if any, criteria are drop-dead items should ideally be done before you start your survey. This will not only be more efficient, in that it allows you to cut off data collection at any failure point, but it will also help dampen the psychological temptation to fudge your criteria, retroactively deciding that a given criterion was not that important after all.
At this stage we just want to reduce the number of systems for in-depth evaluation from potentially dozens to perhaps three or four. As such, we will be refining our review criteria later, so if something isn't really a drop-dead criterion, don't mark it so. It's amazing the variety of feature tradeoffs people tend to make further down the line.
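The drop-dead logic described above amounts to a simple short-circuit check, sketched below with hypothetical criteria names:

```python
# Sketch of screening with drop-dead criteria: evaluation stops at the
# first must-have that scores zero, as suggested above.
def screen(system_scores, drop_dead):
    """Return None if any drop-dead criterion scores 0; else the total score."""
    for criterion in drop_dead:
        if system_scores.get(criterion, 0) == 0:
            return None  # out of consideration; stop collecting data
    return sum(system_scores.values())

scores = {"audit_trail": 0, "functionality": 2, "community": 2}
result = screen(scores, drop_dead=["audit_trail"])  # None: fails a must-have
```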
While functionality is a key aspect of selecting a system, its assessment must be used with care. Depending on how a system is designed, a key function that you are looking for might not have been implemented, but in one system it can easily be added, while in another it would take a complete redesign of the application. Also consider the possibility of whether this function must be intrinsic to the application or if you can pair the application being evaluated with another application to cover the gap.
In most cases, you can obtain much of the functionality information from the project's web site or, occasionally, web sites. Some projects have multiple web sites, usually with one focused on project development and another targeting general users or application support. There are two different types of functionality to be tested. The first might be termed "general functionality" that would apply to almost any system. Examples of this could include the following:
- User authentication
- Audit trail (I'm big on detailed audit trails, as they can make the difference between being in regulatory compliance and having a plant shut down. Even if you aren't required to have them, they are generally a good idea, as the records they maintain may be the only way for you to identify and correct a problem.)
- A system status display sufficient for the user to understand the state of the system
- Ability to store data in a secure fashion
We might term the second as "application functionality." This is functionality specifically required to be able to perform your job. As the subject matter expert, you will be the one to create this list. Items might be as diverse as the following:
For a laboratory information management system...
- Can it print bar coded labels?
- Can it track the location of samples through the laboratory, as well as maintain the chain-of-custody for the sample?
- Can it track the certification of analysts for different types of equipment, as well as monitor the preventive maintenance on the instruments?
- Can it generate and track aliquots of the original sample?
For a geographic information system (GIS)...
- What range of map projections is supported?
- Does the system allow you to create custom parameters?
- Does it allow import of alternate system formats?
- Can it directly interface with global positioning system (GPS) devices?
For a library management software system (LMSS) (or integrated library system [ILS], if you prefer, or even library information management system [LIMS]; have you ever noticed how scientists love to reuse acronyms, even within the same field?)...
- Can you identify the location of any item in the collection at a given time?
- Can you identify any items that were sent out for repair and when?
- Can it distinguish between multiple copies of an item and between same-named items on different types of media?
- Can it handle input from RFID tags?
- Can it handle and differentiate different clients residing at the same address?
- If needed, can it correlate the client's age with the particular item they are requesting, in case you have to deal with any type of age-appropriate restrictions?
- If so, can it be overridden where necessary (maintaining the appropriate records in the audit trail as to why the rule was overridden)?
For an archival record manager (This classification can cover a lot of ground due to all of the different ways that "archival" and "record" are interpreted.)...
- In some operations, a record can be any information about a sample, including the sample itself. By regulation, some types of information must be maintained essentially forever. In others, you might have to keep one type of information for five years, while data of another type must be maintained for 50 years.
- Can the application handle tracking records for different amounts of time?
- Does the system automatically delete these records at the end of the retention period or does it ask for confirmation from a person?
- Can overrides be applied to a particular sample so that its records are not allowed to be deleted, either manually or because they are past their holding date, such as records that might be related to litigation, while again maintaining all information about the override, and who applied it, in the audit trail?
- In other operations, an archival record manager may actually refer to the management of archival records, be these business plans, architectural plans, memos, art work, etc.
- Does the system keep track of the type of each record?
- Does the system support appropriate metadata on different types of records?
- Does it just record an item's location and who is responsible for it, such as for a work of art?
- If a document, does it just maintain an electronic representation of a document, such as a PDF file, or does it record the location of the original physical document, or can it do both?
- Can it manage both permanently archived items, such as a work of art or a historically significant document, and more transitory items, where your record retention rules say to save it for five, 10, 15 years, etc., and then destroy it?
- In the latter case, does the system require human approval before destroying any electronic documents or flagging a physical item for disposal? Does it require a single human sign-off or must a chain of people sign-off on it, just to confirm that it is something to be discarded by business rules and not an attempt to hide anything?
For an electronic medical record (EMR) or electronic health record (EHR) system...
- This is a challenging quagmire, with frequently changing regulations and requirements. Depending on how you want to break it down, this heading can be segmented into two classes: electronic medical records (EMR), which constitute an electronic version of the tracking of the patient's health, and electronic health records (EHR), which contain extensive information on the patient, including test results, diagnostic information, and other observations by the medical staff. For those who want to get picky, you can also subdivide the heading into imaging systems, such as X-rays and CAT scans, and other specialized systems.
- Can all records be accessed quickly and completely under emergency situations?
- What functionality is in place to minimize the risk of a Health Insurance Portability and Accountability Act (HIPAA) violation?
- What functionality exists for automated data transfer from instruments or laboratory data systems to minimize transcription errors?
- How are these records integrated with any billing or other medical practice system?
- If integrated with the EMR and EHR record systems, does this application apply granular control over who can access these records and what information they are able to see?
For an enterprise resource planning (ERP) system...
- Davis has indicated that a generally accepted definition of an ERP system is a "complex, modular software integrated across the company with capabilities for most functions in the organization." I believe this translates as "good luck," considering the complexity of the systems an ERP is designed to model and all of the functional requirements that go into that. It is perhaps for that reason that successful ERP implementations generally take several years.
- ERP systems generally must be integrated with other informatics systems within the organization. What types of interfaces does this system support?
- Is their definition of plug-and-play that you just have to configure the addresses and fields to exchange?
- Is their definition of interface that the system can read in information in a specified format and export the same, leaving you to write the middleware program to translate the formats between the two systems?
From the above, it is easy to see why researchers have encountered difficulty in developing a fixed method that can be used to evaluate anything. At this point, it is quite acceptable to group similar functions together — as this is a high-level survey to identify which systems will definitely not be suitable — so we can focus our research on those that might be. Researchers such as Sarrab and Rehman summarize system functionality as "achieving the user's requirements, correct output as user's expectations and verify that the software functions appropriately as needed."
Suggested ratings are:
- Zero – Application does not support required functionality.
- One – Application supports the majority of functionality to at least a usable extent.
- Two – Application meets or exceeds all functional requirements.
Many of the researchers I've encountered have indicated that community is the most critical factor of a FLOSS project. There are a number of reasons for this. First, the health and sustainability of a FLOSS project is indicated by a large and diverse community that is both active and responsive. Additionally, the core programmers in a FLOSS project are generally few, and it is the size of the project community that determines how well-reviewed the application is, ensuring quality control of the project's code. Finally, the size of the project community correlates with the lifetime of the project.
Suggested ratings are:
- Zero – No community exists. No development activity is observable. Project is dead.
- One – Community is small and perhaps insular. May consist of just one or two programmers with a small number of satellite users.
- Two – Community is large and dynamic, with many contributors and active communication between the core developers and the rest of the community. Community is responsive to outside inquiries.
Since the main goal of this survey procedure is to evaluate FLOSS products, the base system cost for the software will normally be low, frequently $0.00. However, this is not the only cost you need to consider. Many of these costs could potentially be placed under multiple headings depending on how your organization is structured. No matter how it itemizes them, there will be additional costs. Typical items to consider include the following:
- Cost of supporting software – e.g. Does it require a commercial database such as Oracle or some other specialized commercial software component?
- Cost of additional hardware – e.g. Does the system require the purchase of additional servers or storage systems? Custom hardware interfaces?
- Cost of training – e.g. How difficult or intuitive is the system to operate? This will impact the cost of training that users must receive. Keep in mind this cost will exist whether you are dealing with an open-source or proprietary system. Are costs for system manuals and other required training material included? Some proprietary vendors don't include them, or might perhaps send a single hard copy of the manuals. Who will perform the training? Whether you hire someone from outside or have some of your own people do it, there will be a cost, as you will be pulling people away from their regular jobs.
- Cost of support – e.g. Is support through a commercial organization or the open-source development group? If the former, what are their contract costs? While harder to evaluate, what is the turnaround time from when you request support? Immediate? Days? Sometime? The amount of time you have to wait for a problem to be fixed is definitely a "cost," whether it means your system is dead in the water or just not as efficient and productive as it could be.
The primary issue here is to be realistic in your evaluation. It's hard to believe that anyone would assume there are no costs associated with using FLOSS, or even proprietary software for that matter, but apparently some do. Foote does a good job of exploring and disproving this belief, showing all of the items that go into figuring the total cost of ownership (TCO), which should be representative of FLOSS applications. I wouldn't call them hidden costs, at least not with FLOSS systems, but rather costs that are overlooked, as so many other things are when people focus on a single central item. To quote Robert Heinlein, "TANSTAAFL!" (There ain't no such thing as a free lunch!)
The important thing here is to pay attention to all of the interactions taking place. For example, if you wished to interface a piece of equipment to your FLOSS application, remember to factor in the cost of the interface. Despite what some advertisers think, data bits don't just disappear from one place and magically appear in another. It is very easy to lose track of where the costs are. Keep in mind that for many items, such as training, you will incur a cost either way you go. If a proprietary vendor says they will provide free training, you can be assured that its cost is included in the contract. But if you take that route, be very careful to read the contract thoroughly, as not all vendors include any training at all. It would be very easy to end up having to pay for "optional" training.
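A back-of-the-envelope tally along these lines might look like the following; every dollar figure is a placeholder, and the line items simply mirror the cost categories above:

```python
# Hypothetical first-year total cost of ownership for a FLOSS system.
# All dollar figures are placeholders, not real quotes.
costs = {
    "license": 0.00,               # typical for FLOSS
    "supporting_software": 0.00,   # e.g. a commercial database, if required
    "additional_hardware": 4000.00,
    "training": 2500.00,
    "support_contract": 1200.00,   # per year, if commercial support is used
}
tco_first_year = sum(costs.values())
print(f"First-year TCO: ${tco_first_year:,.2f}")  # First-year TCO: $7,700.00
```

The point of writing it out, even crudely, is that the $0.00 license line is visibly only one entry among several.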
Suggested ratings are:
- Zero – Installation and support costs are excessive and greatly exceed any available budget.
- One – System may require purchase of additional hardware or customization. These costs, along with training costs, are within potential budget.
- Two – Installation and support costs are relatively minor, with no additional hardware required. System design is relatively intuitive with in-depth documentation and active support from the community.
This heading can be somewhat confusing in terms of how it is interpreted even though most of the recommendations we looked at include it. Popularity is sometimes considered to be similar to market share. That is, of the number of people using a specific application in a given class of open-source applications, what percentage do they represent out of all people running applications in the class? If the majority of people are using a single system, this might indicate that it is the better system, or it might just indicate that other systems are newer and, even if potentially better, people haven't migrated over to them yet. An alternate approach to examining it is to ask how many times the application has been downloaded. In general, the larger the market share or the number of downloads, the more likely that a given product is to be usable. This is not an absolute, as people may have downloaded the application for testing and then rejected it or downloaded it simply to game the system, but it is a place to start. The point of this question usually isn't to determine how popular a particular application is, but rather to ensure that it is being used and it is a living (as opposed to an abandoned) project. Be leery of those applications with just a few downloads. If there is a large group of people using the application, there is a higher probability that the application works as claimed.
At the same time, learning who some of the other users of this application are can give you some insight of how well it actually works. As Silva reminds us, "the best insight you can get into a product is from another user who has been using it for a while."
Suggested ratings are:
- Zero – No other discernible users or reality of listed users is questionable.
- One – Application is being used by multiple groups, but represents only a small fraction of its "market." Appears to be little 'buzz' about the application.
- Two – Application appears to be widely used and represents a significant fraction of its "market." This rating is enhanced if listed users include major commercial organizations.
Product support can be a critically important topic for any application. Whether you are selecting a proprietary application or a FLOSS one, it is vitally important to ensure that you will have reliable support available. Just because you purchased a proprietary program will not ensure that you have the support you need. Some vendors include at least limited support in their contracts, others don't. However, over the years I've found that even purchasing a separate support contract doesn't ensure that the people who answer the phone will be able to help you. When making the final decision, don't make assumptions: research!
Support can be broken down into several different sub-categories:
- User manuals – Do they exist? What is their quality?
- System managers manuals – Do they exist? What is their quality?
- Application developers manuals – Do they exist? What is their quality?
- System design manuals – Do they exist? What degree of detail do they provide?
- For database-related projects, is an accurate and detailed entity relationship diagram (ERD) included?
- Have any third-party books been written about this application? Are they readily available, readable, and easy to interpret?
- Is product support provided directly by the application development community?
- Is product support provided by an independent user group, separate from the development group?
- Is commercial product support available?
- If so, what are their rates?
- Are on-site classes available?
- Are online training classes available?
- A frequently overlooked type of product support is how well documented the program code is. Are embedded comments sparse or frequent and meaningful? Are the names of program variables and functions arbitrary or meaningful?
Another factor in evaluating product support is whether you have anyone on your team with the expertise to understand it. That is not a derogatory statement: depending on the issue, someone might have to modify an associated database, the application code, or the code in a required library, whether to correct an error or add functionality. Do any of your people have expertise in that language? Would someone have to start from scratch, or do you have the budget to hire an outside consultant? Even if you have no desire to modify the code, having someone on the project that understands the language used can be a big help in discerning how the program works, as well as determining how meaningful the program comments are.
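If someone on your team does read the code, even a crude measure such as comment density can help structure the assessment of how well documented it is. The sketch below counts full-line comments in Python source text; what counts as "sparse" versus "frequent" is left to your judgment, and meaningfulness still has to be judged by a human.

```python
# Rough indicator of how well Python source text is documented: the
# fraction of non-blank lines that are full-line comments. Inline
# comments and docstrings are deliberately ignored to keep it simple.
def comment_density(source):
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    return sum(1 for ln in lines if ln.startswith("#")) / len(lines)

sample = "# load configuration\nconfig = load()\n\n# validate inputs\ncheck(config)\n"
density = comment_density(sample)  # 2 comments out of 4 non-blank lines = 0.5
```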
Suggested ratings are:
- Zero – Limited or no support available. Documentation essentially non-existent, source code minimally documented, no user group support, and erratic response from the developers.
- One – Documentation scattered and of poor quality. Support from user group discouraged and no commercial support options exist.
- Two – Excellent documentation, including user, system manager, and developer documentation. Enthusiastic support from the user community and developers. Commercial third-party support available for those desiring it. Third-party books may also have been released documenting the use of this product.
This is another item that can be interpreted in multiple ways. One way to look at it is how quickly the developers respond to any bug report. Depending on the particular FLOSS project, you may actually be able to review the problem logs for the system and see what the average response time was between a bug being reported and the problem being resolved. In some cases this might be hours or days; in others, it is never resolved. To be fair, don't base your decision on a single instance in the log file, as some bugs are much easier to find and fix than others. However, a constant stream of open bugs, or bugs that have only been closed after months or years, should make you leery.
To others, this question is about determining whether development is still taking place on the project or if it is dead. Alternatively, it is like asking if anybody is maintaining the system and correcting bugs when they are discovered. There are several ways of addressing this issue. Examining the problem logs described above is one way of checking for project activity, while another is to check the release dates for different versions of the application. Are releases random or on a temporal schedule? How long has it been since the application was last updated? If the last release date was over a year or two ago, this is cause for concern and should trigger a closer look. Just because there hasn't been a recent release does not mean that the project has been abandoned. If the development of the application has advanced to the point where it is stable and no other changes need to be made, you may not see a recent release because none is needed. However, the latter is very rare, both because bugs can be so insidious and because a lot of programmers can't resist just tweaking things to make them a wee bit better. If a project is inactive, but everything else regarding the project looks good, it might be possible to work with the developers to revitalize it. While this course of action is feasible, it is important to realize that you are taking on a great deal of responsibility and an unknown amount of expense. The latter is particularly true as you may have to assign one or more developers to work on the project full time.
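The release-date check can be reduced to the same three-level scale used throughout this survey. The sketch below applies the one- and two-year thresholds from this section's suggested ratings; release dates would of course be read from the project's change log or download page, and a real check should also weigh forum and change log activity.

```python
from datetime import date

# Sketch: convert time since the last release into the survey's 0/1/2
# activity rating, using the one- and two-year thresholds suggested here.
def activity_rating(last_release, today=None):
    today = today or date.today()
    years = (today - last_release).days / 365.25
    if years > 2:
        return 0  # likely abandoned
    if years > 1:
        return 1  # stale; investigate further
    return 2      # recent activity

rating = activity_rating(date(2014, 3, 1), today=date(2016, 9, 1))  # 0: over two years
```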
Suggested ratings are:
- Zero – No releases, change log activity, or active development discussion in message forums in over two years.
- One – No releases, change log activity or active development discussions in message forums for between one and two years.
- Two – A new version has been released within the year, change logs show recent development activity, and there is active development discussion in the message forums.
Reliability is the degree to which you can rely on the application to function properly. Of course, the exact definition becomes somewhat more involved. The reliability of a system is defined as the ability of an application to operate properly under a specified set of conditions for a specified period of time. Fleming states that "[o]ne aspect of this characteristic is fault tolerance that is the ability of a system to withstand component failure. For example if the network goes down for 20 seconds then comes back the system should be able to recover and continue functioning."
Because of its nature, the reliability of a system is hard to measure, as you are basically waiting to see how frequently it goes down. While we'd like to aim for never, one should probably be satisfied if the system recovered properly after the failure. In most instances, unless you are actually testing a system under load, the best that you can hope for is to observe indicators from which you can infer its reliability. As a generality, the more mature a given code base is, the more reliable it is; but keep in mind that this is only a generality, and there are always incidents that can destabilize everything. Unfortunately, it frequently feels as if the problem turns out to be something that you would swear was totally unrelated. Face it, Murphy is just cleverer than you.
Wheeler also reminds us that "[p]roblem reports are not necessarily a sign of poor reliability - people often complain about highly reliable programs, because their high reliability often leads both customers and engineers to extremely high expectations." One thing that can be very reassuring is to see that the community takes reliability seriously by continually testing the system during development.
Suggested ratings are:
- Zero – Error or bug tracking logs show a high incidence of serious system problems. Perhaps worse, no logs of reported problems are kept at all, particularly for systems that have been in release for less than a year.
- One – Error logs show relatively few repeating or serious problems, particularly if these entries correlate with entries in the change logs indicating that a particular problem has been corrected. System has been in release for over a year.
- Two – Error logs are maintained but show relatively few bug reports, with the majority of them being minor. A version of the system, using the same code base, has been in release for over two years. Developers both distribute and run a test suite to confirm proper system operation.
Performance of an application is always a concern. Depending on what the application is trying to do and how the developers coded the functions, you may encounter a program that works perfectly but is just too unresponsive to use. Sometimes this is a matter of hardware; other times it is just inefficient coding, such as making sequential calls to a database to return part of a block of data rather than making a single call to return the whole block at once. Performance and scalability are usually closely linked.
You might be able to obtain some information on the system's actual performance from the project web site, but it is hard to tell if this is for representative or selected data. Reviewing the project mailing list may provide a more accurate indication of the system's performance or any performance problems encountered. Testing the system under your working conditions is the only way to make certain what the system's actual performance is. Unfortunately, the steps involved in setting up such a test system require much more effort than a high-level survey will allow. If any user reviews exist, they may give an insight into the system's performance. Locating other users through the project message board might be a very useful resource as well, particularly if they handle the same projected work loads that you are expecting.
It is difficult to define performance ratings without knowing what the application is supposed to do. However, for systems that interact with a human operator, the time lapse between when a function is initiated and when the system responds can be suggestive. If the project maintains a test suite, particularly one containing sample data, reviewing its processing time can give an insight into the system's performance as well. Response delays of even a few seconds in frequently executed functions will not only kill overall process performance but also leave users resistant to using the system.
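When you can get a candidate running at all, even a crude stopwatch on a frequently executed function yields a number worth recording in your survey notebook. This sketch assumes a two-second acceptability threshold, and the function timed is only a stand-in for something like a results-entry call:

```python
import time

def time_function(func, *args, threshold_seconds=2.0):
    """Time a single call and flag whether it met the response threshold."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= threshold_seconds

# Stand-in for a frequently executed function such as results entry
def sample_results_entry(n):
    return sum(i * i for i in range(n))

_, elapsed, acceptable = time_function(sample_results_entry, 100_000)
print(f"{elapsed:.4f}s, acceptable: {acceptable}")
```

Averaging several calls, rather than trusting a single measurement, gives a more defensible figure to enter against the rating definitions below.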
Suggested ratings are:
- Zero – A system designed to be interactive fails to respond in an acceptable time frame. For many types of applications it is reasonable to expect an almost instantaneous response, particularly for screen management functions. It is not reasonable for a system to take over a minute, or even five seconds, to switch screens or acknowledge an input, particularly for frequently executed functions such as results entry, modification, or review. A system that batch processes data maxes out under data loads below that of your current system.
- One – A system designed to be interactive appears to lag behind human entry for peripheral functions, but frequently accessed functions, such as results entry, modification, or review appear to respond almost instantaneously. A system that batch processes data maxes out under your existing data loads.
- Two – System is highly responsive, showing no annoying delayed responses. A system that batch processes data can process several times your current data load before maxing out.
Scalability is the assurance that the application will operate over the data scale range you will be working with. In general, it means that if you test the system's functionality with a low data load, the application will "scale up" to handle larger data loads. This may be handled by expanding from a single processor to a larger cluster or parallel processing system. Note that for a given application, throwing more hardware at it may not resolve the problem, as the application needs to be designed to take advantage of that additional hardware. Another caveat is to carefully examine the flow of data through your system. The processor is not the only place you can encounter roadblocks limiting scalability; another possibility is how quickly the system can access the needed data. If the system is processing the data faster than it can access it, adding more computing power will not resolve the problem. The limiting issues might be the bandwidth of your communication lines, the access speed of the devices that the data is stored on, or contention for needed resources with other applications. As with many aspects of selecting a system and getting it up and running, making assumptions is the real killer.
Silva indicates that many open-source applications are built on the LAMP (Linux, Apache, MySQL, PHP/Perl/Python) technology stack and that this is one of the most scalable configurations available. However, you should ensure that there is evidence that the application has been successfully tested that way; the performance survey and test phases are never a good time to start making assumptions. A look at the application's user base will likely identify someone who can provide this feedback.
Suggested ratings are:
- Zero – System does not support scaling, whether due to application design or restriction of critical resources, such as rate of data access.
- One – System supports limited scaling, but overhead or resource contention, such as a data bottleneck, results in a quick performance fall off.
- Two – System is balanced and scales well, supporting large processing clusters or cloud operations without any restrictive resource pinch-points.
Usability means pretty much what it says. The concern here is not how well the program works but rather how easy it is to learn and use. The interface should be clear, intuitive, and help guide the user through the program's operation. Despite Steve Jobs, there is a limit to how intuitive an interface can be, so the operation of the interface should be clearly documented in the user manual. Ideally the system will support a good context-sensitive help system as well. The best help systems may also provide multimedia support so that the system can actually show you how something should be done, rather than trying to tell you. I've found that a good video can frequently be worth well more than a thousand words! No matter how much time is spent writing the text for a manual or help system, it will always be unclear to somebody, if only because of the diversity of backgrounds of the people using it.
The interface between the operator and the computer may vary with the purpose of the application. While with new applications you are more likely to encounter a graphical user interface (GUI), there are still instances where you may encounter a command-line interface. Both types of interfaces have their advantages, and there are many times when something is actually easier to do with a command-line interface. The important thing to remember is that it is your interface with the system. It should be easy to submit commands to the system and interpret its response without having to hunt through a lot of extraneous information. This is normally best done by keeping the interface as clean and uncluttered as possible. As Abran et al. have pointed out, the usability of a given interface varies with "the nature of the user, the task and the environment."
If you would prefer a somewhat drier set of definitions, Abran et al. also extracted the definitions of usability from a variety of ISO standards, and they are included in the following table:
Table 3. ISO Usability Definitions
Fleming translates this into a somewhat more colloquial statement: “Usability only exists with regard to functionality and refers to the ease of use for a given function.” For those interested in learning more about the usability debate, I suggest that you check out Andreasen et al. and especially Saxena and Dubey.
Suggested ratings are:
- Zero – System is difficult to use, frequently requiring switching between multiple screens or menus to perform a simple function. Operation of the system discourages use and can actively antagonize users.
- One – System is usable but relatively unintuitive regarding how to perform a function. Both control and output displays tend to be cluttered, increasing the effort required to operate the system and interpret its output.
- Two – System is relatively intuitive and designed to help guide the user through its operation. Ideally, this is complemented with a context-sensitive help system to minimize any uncertainties in operation, particularly for any rarely used functions.
This heading overlaps with functionality and is usually difficult to assess from a high-level evaluation. While it is unlikely that you will observe any obvious security issues during a survey of this type, there are indicators that can provide a hint as to how much the application's designers and developers were concerned with security.
The simplest approach is to look at whether they've done anything that shows a concern for possible security vulnerabilities. The following are a few potential indicators that you can look for, but just being observant when seeing a demonstration can also tell you a lot.
- Is there any mention of security in the system's documentation? Does it describe any potential holes that you need to guard against or configuration changes you might make to your system environment to reduce any risks?
- Do the manuals describe any type of procedure for reporting bugs or observed system issues?
- If you have access to the developers, discuss any existing process for reporting and tracking security issues.
- Check their error logs and see if any security-related issues are listed. If they are, were they repaired, and what was the turnaround time to repair them?
- If the developers are security-conscious, they will almost certainly want to prove to whoever receives the program that it hasn't been modified by a third party. The basic way of doing this is by separately sending you what is known as an MD5 hash. This is a distinctive number generated by another program from your application's code. If you generate a new MD5 hash from the code you receive, the two numbers should match. If they don't, something in the code has been altered. More security-conscious developers might instead generate a cryptographic signature incorporating the code. This will tell you who sent the code as well as indicate whether the program was altered.
- Depending on the type of program, does it allow a 21 CFR Part 11-compliant implementation or conform to a similar standard?
- Depending on the type of application, does it include a detailed audit trail and security logs?
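Checking an MD5 hash takes only a few lines in most languages. The sketch below writes a stand-in file so it can run self-contained; in practice the file would be the archive you actually downloaded, and the published hash would arrive separately from the developers:

```python
import hashlib

def md5_of_file(path):
    """Compute the MD5 hash of a file, reading in chunks to handle large downloads."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for a downloaded release archive (file name is hypothetical)
with open("application-1.0.tar.gz", "wb") as f:
    f.write(b"The quick brown fox jumps over the lazy dog")

# The published hash would come separately, e.g., in the release announcement
published_hash = "9e107d9d372bb6826bd81d3542a419d6"
if md5_of_file("application-1.0.tar.gz") == published_hash:
    print("Hashes match: the download has not been altered.")
else:
    print("Hash mismatch: something in the code has been changed.")
```

Note that a matching hash only confirms integrity against the published value; a cryptographic signature is needed to confirm who published it.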
Suggested ratings are:
- Zero – System shows no concern with security or operator tracking. Anyone can walk up to it and execute a function without having to log in. System doesn't support even a minimal audit trail. Any intermediate files are easily accessible and modifiable outside of the system.
- One – System shows some attempt at user control but supports only a minimal audit trail. It may support a user table, but it fails to follow best practices by allowing user records to be deleted. Audit trail is modifiable by power users.
- Two – Maintaining system security is emphasized in the user documentation. A detailed audit trail is maintained that logs all system changes and user activities. Application is distributed along with an MD5 hash or incorporated into an electronic signature by the developer.
The goal of this topic is to identify how easily the functionality of this application can be altered or how capable it is of handling situations outside of its design parameters. Systems are generally designed to be either configurable or customizable, sometimes with a combination of both.
- Configurability – This refers to how much or how easily the functionality of the system can be altered by changing configuration settings. Configurable changes do not require any changes to the application code and generally simplify future application upgrades.
- Customizability – This refers to whether the functionality of the system must be altered by modifying the application's code. As we are targeting open-source systems, the initial assumption might be that they are all customizable; however, this can be affected by the type of license the application is released under. More practically, how easily an application can be customized depends on how well it is designed and documented. While in theory you might be able to customize a system, if it is a mass of spaghetti code and poorly compartmentalized, doing so might be a nightmare. In any case, if you customize the system code, you may not be able to take advantage of any system upgrades without having to recreate the customizations in them.
- Extendability – While you won't find this term in most definitions, it describes a hybrid system that is both configurable and customizable. It is normally configured using the same approaches as a standard configurable system, but it retains the ability to be upgraded by feeding any code customizations through an application programming interface (API). As long as this API is maintained between upgrades, any extension modules should continue to work.
In addition, a well-designed application is usually modular, which makes program changes easier. In an ideal world, any application that you may have to customize will be specifically designed to make customization simple. There are a variety of ways of doing this. Perhaps the easiest, for an application designed to be modular, is to support optional software plug-in modules that add extra functionality. Unless these are "off-the-shelf" modules, you would need to confirm that there is appropriate documentation regarding their design and use. This would most likely be done via an API, as discussed above. Depending on the system, you could transfer data through the API or have one system control another, the caveat again being that you need thorough documentation of the API and its capabilities.
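The plug-in arrangement can be illustrated with a toy example. Everything in this sketch is hypothetical: a host application exposes a small, stable API that extension modules register against, which shows why extensions keep working as long as the API is preserved across upgrades:

```python
class HostAPI:
    """Hypothetical stable API a host application exposes to plug-in modules."""
    def __init__(self):
        self._plugins = {}

    def register(self, name, handler):
        """Plug-ins register a handler under a name; the host never sees their internals."""
        self._plugins[name] = handler

    def run(self, name, data):
        """Invoke a registered plug-in by name."""
        return self._plugins[name](data)

# An extension module touches only the API, never the host's internal code
api = HostAPI()
api.register("uppercase", lambda text: text.upper())
print(api.run("uppercase", "sample result"))  # → SAMPLE RESULT
```

Because the extension depends only on `register` and `run`, the host's internals can be rewritten freely between releases without breaking it.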
In the majority of situations, I strongly encourage you to stick with a configurable system, assuming you can find one that meets your needs. Customizing a system is rarely justified unless your situation is truly exceptional. While almost everyone feels that their needs are unique, the reality is that a well-designed configurable system can generally meet them.
Suggested ratings are:
- Zero – Application does not support configuration and shows evidence of being difficult to customize, usually indicated by use of spaghetti code rather than modular design and poorly named variables and functions, along with cryptic or no embedded comments. In worst-case situations, the source code has been deliberately obfuscated to make the system even less customizable.
- One – Application supports minor configuration capabilities or is moderately difficult to modify. The latter might be due to minimal application documentation or poor programming practices, but not deliberate obfuscation.
- Two – Application is highly configurable and accompanied by detailed documentation guiding the user through its configuration. Code is clearly documented and commented. It also follows good programming practices with highly modularized functionality, simplifying customization of the program's source code, ideally via an API.
Determine whether this software will work with the rest of the systems that you plan to use. Exactly what to check for is up to you, as you are the only one who has any idea what you will be doing with it. The following is a list of possible items that might conceivably fall under this heading:
- Does it understand the data and control protocols to talk to and control external equipment such as a drill press, telescope, or sewing machine (as appropriate)?
- Is it designed to conform to both electronic and physical standards to avoid being locked into a single supplier?
- Does it handle localization to avoid conflicts with local systems?
Suggested ratings are:
- Zero – System provides no support for integration with other applications. File formats and communication protocols used are not documented.
- One – System is not optimized for either interoperability or integration with other systems. However, it does use standard protocols so that other applications can interpret its activities. Application likely does not include an API or any existing API is undocumented.
- Two – Application is optimized for interoperability and integration with other systems. All interfaces and protocols, particularly for any existing API, are clearly documented and accompanied with sample code.
This section refers to the type of license that the application was released under and the associated legal and functional implications. With proprietary software, many people never bother to read the software license, either because they don't care or because they think they have no choice but to accept it. Be that as it may, when selecting a FLOSS application, it is wise to take the time to read the accompanying license: it can make a big difference in what you can do with the software. Note that if the software has no license at all, legally you have no right to even download it, let alone run it.
While you have no control over which license the application was released under, you definitely control whether you wish to use it under the terms of the license. Which types of licenses are acceptable strongly depends on what you plan to do with the application. Do you intend to use the application as is or do you plan to modify it? If the latter, what do you plan to do with the modified code? Do you want to integrate this FLOSS application with another, either proprietary or open-source? Does this license clash with theirs? Your right to do any of these things is controlled by the license, so it must be considered very carefully, both in the light of what you want to do now and what you might want to do in the future.
One of the first things to do is to confirm that the application is even open-source; just being able to see the source code is insufficient. To qualify as open-source the license must comply with the 10 points listed in the Open Source Definition maintained by the Open Source Initiative (OSI). Pulling just the headers, their web site lists these as the required criteria:
- Free redistribution
- Source code
- Derived works
- Integrity of the author's source code
- No discrimination against persons or groups
- No discrimination against fields of endeavor
- Distribution of licenses
- License must not be specific to a product
- License must not restrict other software
- License must be technology-neutral
While somewhat cryptic to look at cold, each of the headings is associated with a longer definition, which primarily boils down to the freedom to use, modify, and redistribute the software. For those wanting to know the justification for each item, there is also an annotated version of this definition. At present, OSI recognizes 71 distinct Open Source licenses, not counting the WTFPL.
One of the functions of the OSI is to review prospective licenses to determine whether they meet these criteria and are indeed open-source. The OSI web site maintains a list of popular licenses, along with links to all approved licenses, sorted by name or category. These lists include full copies of the licenses. While clearer than most legal documents, they can still be somewhat confusing, particularly if you are trying to select one. If the definitions seem to blur, you might want to check out the Software Licenses Explained in Plain English web page maintained by TL;DRLegal. As long as the license is classified as an open-source license and you aren't planning to modify it yourself or integrate it into other systems, you probably won't have any problems. However, if you have any uncertainty at all, it might be worth making the investment to discuss the license with an intellectual property lawyer who is familiar with OSS/FS before you inadvertently commit your organization to terms that conflict with their plans.
If your plans include the possible creation of complementary software, I suggest a quick read of Wheeler's essay on selecting a license. The potential problem here is that most open-source and proprietary applications contain multiple libraries or sub-applications, each with their own license. Depending on which licenses the original developers used, they may be compatible with other applications you wish to use, or they may be incompatible. The following figure illustrates some of these complications:
Just to be absolutely clear, the licenses of FLOSS and proprietary applications generally disavow any warranty that the program will work and disclaim any liability for damage or injuries that result from the program's use. Having said that, you may be able to purchase a warranty separately; just don't anticipate any legal recourse in the event of a system failure.
Suggested ratings are:
- Zero – Application does not include a license or license terms are unacceptable or incompatible with those of other applications being used.
- One – Application includes a license, but it contains potential conflicts with other licenses or allowable use that will need to be carefully reviewed.
- Two – License is fully open, allowing you to freely use the software.
Completing the evaluation
Other than independent reviews, one of the best ways to obtain some of the above information is direct testing of a system. Usually that is impractical in a survey situation because of the time it would take to install and configure an instance of the application. However, a new factor has entered the picture which may change this. Docker is a new service that combines an application, along with all of its dependencies, into a single distributable package, which they call a container. Because everything required is in the container, it is guaranteed to run the same on any system. Docker specifically states that their containers "are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure." Using only a fraction of the resources that a virtual machine (VM) would require, you can easily run multiple containers on a laptop and switch back and forth while testing.
While the Docker web site contains repositories for a number of different containers, there are multiple web sites that also host containers configured with a variety of applications. If you can locate one that contains the application you are surveying, this makes it a simple matter to try it out. Probably the easiest way to check is to run a web search containing the terms "docker", "container", and the name of the application that you are seeking.
While you can perform this application survey in many ways, it might be prudent, both to keep your defined scoring criteria in front of the evaluators and to simplify scoring, to generate a document such as the following table, with columns representing the criteria being evaluated, your rating definitions, and the numeric rating that your evaluators actually assign to the system. If you have decided to use weighting factors instead of adjusting your definitions, you will also need to include columns for the weighting factors and for the results of the evaluation after applying them.
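If you do opt for weighting factors, the arithmetic is simple enough to script. In this sketch the criteria, ratings (on the 0–2 scale), and weights are all invented for illustration; each rating is multiplied by its weight and the results are totaled:

```python
# Hypothetical ratings (0-2 scale) and weighting factors for one candidate system
ratings = {"activity": 2, "reliability": 1, "performance": 2, "security": 1}
weights = {"activity": 1.0, "reliability": 2.0, "performance": 1.5, "security": 2.0}

# Weighted score per criterion, then the summary total for the bottom row
weighted = {name: ratings[name] * weights[name] for name in ratings}
total = sum(weighted.values())

for name, score in weighted.items():
    print(f"{name:12s} {ratings[name]} x {weights[name]:.1f} = {score:.1f}")
print(f"{'total':12s} {total:.1f}")
```

Recording the weighted column alongside the raw ratings keeps the survey table auditable if the weights are later questioned.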
To demonstrate how this survey procedure is intended to be used, this section will apply the defined protocol to a block of open-source LIMS applications suitable for use in a chemical laboratory. Note that in this instance, we are using LIMS to refer to a laboratory information management system, not a labor information management system, a legislative information management system, or any other permutation that matches the LIMS acronym. The specific criteria used will quite likely be different from the ones used in your screening as the types of systems or the specific functionality required will be different.
For the purpose of this demonstration, we will not attempt to screen all of the open-source LIMS available. Instead, we will select a subset of the systems that have been announced and attempt to apply our protocol to them, hopefully screening out the systems not meeting our requirements so that the number of in-depth evaluations can be reduced. Part of the difficulty here is the broad range of fields that the term LIMS covers. There are general-purpose LIMS, which can be configured to handle a wide range of samples, as well as targeted LIMS, which are designed to fill a particular niche. As such, you can find LIMS targeting such specific areas as drinking water/wastewater analysis, mining, radioisotopes, proteomics, and genetic analysis. Whether you make it a formal part of the screening process or isolate them while collecting the applications to be surveyed, you will need to filter out the systems which will not handle the type of samples you are dealing with. This issue is particularly prominent with LIMS, but it will likely be encountered when screening other types of software as well. The following are the systems that we will include in this attempt:
For simplicity in comparing results, I've reordered the screening table so that the first column contains the criteria being evaluated and the other columns correspond to the evaluation results for the LIMS being evaluated, with the bottom row reserved for the corresponding score summary. I have also added two subdivisions under "system functionality" for the programming language and the operating system used. Depending on the types of systems you are surveying, you will likely be including additional subdivisions. Scoring can be handled several ways. In this example, you might list the programming language in the appropriate cell. Depending on whether you have a team member competent in that language, you can use the results to set the value for the main criteria, e.g. system functionality. Alternately, while you will record the information for programming language in your survey notebook, you can insert an actual numerical result into the corresponding cell so that the sub-criteria can be evaluated separately or used to generate a value for the main criteria field.
Normally, the best starting approach is to check for existing reviews of the systems, but here we had trouble finding any. While references to Bika LIMS were common, and it was referenced in a number of scientific papers, actual reviews of the product were hard to come by and were usually for older versions. No reviews were found for eyeLIMS or Open-LIMS; the closest thing I found to an impartial review was two postings on Joel Limardo's LIMSExpert blog.
Based on the above summary scores, we would definitely filter out both eyeLIMS and Open-LIMS, while Bika LIMS would justify a more in-depth evaluation.
* Note: The rating for Open-LIMS may need to be revisited, as it appears that major development work on this system has been taken over by Joel Limardo of ForwardPhase Technologies, so several of the rated parameters may be subject to major shifts. It is currently unclear whether this is a joint project with the original developer or a fork.
In support of a project to prepare published evaluations of various FLOSS applications, we have reviewed the FLOSS literature, focusing particularly on any assessment or evaluation documents. While many described proposed evaluation methods, none of them appear to be particularly popular or have developed an active community around them. Several review papers on the topic, while identifying multiple methods and their advantages, found flaws in all of them, particularly in terms of being able to perform a quality assessment on any FLOSS application. Many of the described systems were explicitly focused on a single class of FLOSS applications, such as library management systems.
By consolidating suggestions and procedures from a number of these papers, we synthesized a general survey process to allow us to quickly assess the status of any given type of FLOSS application, allowing us to triage the candidates and identify the most promising ones for in-depth evaluation. Note that this process is designed for performing high-level surveys; it is not designed to perform the in-depth evaluations required for product selection.
As a minor aside, in the course of researching this article I was surprised by the high percentage of published papers on FLOSS that appeared in classic subscription journals, as opposed to any of the various open-access journals available. Somehow it seems a curious disconnect not to publish articles on open-source software in openly accessible journals. Whether this is just habit of submission or due to more considered reasons would be interesting to know.
- "What is free software?". GNU Project. Free Software Foundation, Inc. 2015. http://www.gnu.org/philosophy/free-sw.html. Retrieved 17 June 2015.
- "The Open Source Definition". Open Source Initiative. 2015. http://opensource.org/osd. Retrieved 17 June 2015.
- Schießle, Björn (12 August 2012). "Free Software, Open Source, FOSS, FLOSS - same same but different". Free Software Foundation Europe. https://fsfe.org/freesoftware/basics/comparison.en.html. Retrieved 5 June 2015.
- "RepOSS: A Flexible OSS Assessment Repository" (PDF). Northeast Asia OSS Promotion Forum WG3. 5 November 2012. http://events.linuxfoundation.org/images/stories/pdf/lceu2012_date.pdf. Retrieved 5 May 2015.
- Doll, Brian (23 December 2013). "10 Million Repositories". GitHub, Inc. https://github.com/blog/1724-10-millionrepositories. Retrieved 8 August 2015.
- Sarrab, Mohamed; Elsabir, Mahmoud; Elgamel, Laila (March 2013). "The Technical, Non-technical Issues and the Challenges of Migration to Free and Open Source Software" (PDF). IJCSI International Journal of Computer Science Issues 10 (2.3). http://ijcsi.org/papers/IJCSI-10-2-3-464-469.pdf.
- Wheeler, David A. (14 June 2011). "Free-Libre / Open Source Software (FLOSS) is Commercial Software". dwheeler.com. http://www.dwheeler.com/essays/commercial-floss.html. Retrieved 28 May 2015.
- Stol, Klaas-Jan; Ali Babar, Muhammad (2010). "A Comparison Framework for Open Source Software Evaluation Methods". In Ågerfalk, P.J.; Boldyreff, C.; González-Barahona, J.M.; Madey, G.R.; Noll, J. Open Source Software: New Horizons. Springer. pp. 389–394. doi:10.1007/978-3-642-13244-5_36. ISBN 9783642132445.
- Fritz, Catherine A.; Carter, Bradley D. (23 August 1994). A Classification And Summary Of Software Evaluation And Selection Methodologies. Mississippi State, MS: Department of Computer Science, Mississippi State University. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.4470.
- "OpenBRR, Business Readiness Rating for Open Source: A Proposed Open Standard to Facilitate Assessment and Adoption of Open Source Software" (PDF). OpenBRR. 2005. http://docencia.etsit.urjc.es/moodle/file.php/125/OpenBRR_Whitepaper.pdf. Retrieved 13 April 2015.
- Wasserman, A.I.; Pal, M.; Chan, C. (10 June 2006). "The Business Readiness Rating: a Framework for Evaluating Open Source" (PDF). Proceedings of the Workshop on Evaluation Frameworks for Open Source Software (EFOSS) at the Second International Conference on Open Source Systems. Lake Como, Italy. pp. 1–5. Archived from the original on 11 January 2007. http://web.archive.org/web/20070111113722/http://www.openbrr.org/comoworkshop/papers/WassermanPalChan_EFOSS06.pdf. Retrieved 15 April 2015.
- Majchrowski, Annick; Deprez, Jean-Christophe (2008). "An Operational Approach for Selecting Open Source Components in a Software Development Project". In O'Connor, R.; Baddoo, N.; Smolander, K.; Messnarz, R. Software Process Improvement. Springer. pp. 176–188. doi:10.1007/978-3-540-85936-9_16. ISBN 9783540859369.
- Petrinja, E.; Nambakam, R.; Sillitti, A. (2009). "Introducing the Open Source Maturity Model". ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development, 2009. IEEE. pp. 37–41. doi:10.1109/FLOSS.2009.5071358. ISBN 9781424437207.
- Deprez, Jean-Christophe; Alexandre, Simon (2008). "Comparing Assessment Methodologies for Free/Open Source Software: OpenBRR and QSOS". In Jedlitschka, Andreas; Salo, Outi. Product-Focused Software Process Improvement. Springer. pp. 189–203. doi:10.1007/978-3-540-69566-0_17. ISBN 9783540695660.
- Wasserman, Anthony I.; Pal, Murugan (2010). "Evaluating Open Source Software" (PDF). Carnegie Mellon University - Silicon Valley. Archived from the original on 18 February 2015. https://web.archive.org/web/20150218173146/http://oss.sv.cmu.edu/readings/EvaluatingOSS_Wasserman.pdf. Retrieved 31 May 2015.
- Jadhav, Anil S.; Sonar, Rajendra M. (March 2009). "Evaluating and selecting software packages: A review". Information and Software Technology 51 (3): 555–563. doi:10.1016/j.infsof.2008.09.003.
- Pani, F.E.; Sanna, D. (11 June 2010). "FAME, A Methodology for Assessing Software Maturity". Atti della IV Conferenza Italiana sul Software Libero. Cagliari, Italy.
- Pani, F.E.; Concas, G.; Sanna, D.; Carrogu, L. (2010). "The FAME Approach: An Assessing Methodology". In Niola, V.; Quartieri, J.; Neri, F.; Caballero, A.A.; Rivas-Echeverria, F.; Mastorakis, N. Proceedings of the 9th WSEAS International Conference on Telecommunications and Informatics (PDF). Stevens Point, WI: WSEAS. ISBN 9789549260021. http://www.wseas.us/e-library/conferences/2010/Catania/TELE-INFO/TELE-INFO-10.pdf.
- Pani, F.E.; Concas, G.; Sanna, S.; Carrogu, L. (August 2010). "The FAMEtool: an automated supporting tool for assessing methodology" (PDF). WSEAS Transactions on Information Science and Applications 7 (8): 1078–1089. http://www.wseas.us/e-library/transactions/information/2010/88-137.pdf.
- Pani, F.E.; Sanna, D.; Marchesi, M.; Concas, G. (2010). "Transferring FAME, a Methodology for Assessing Open Source Solutions, from University to SMEs". In D'Atri, A.; De Marco, M.; Braccini, A.M.; Cabiddu, F. Management of the Interconnected World. Springer. pp. 495–502. doi:10.1007/978-3-7908-2404-9_57. ISBN 9783790824049.
- Soto, M.; Ciolkowski, M. (2009). "The QualOSS open source assessment model measuring the performance of open source communities". 3rd International Symposium on Empirical Software Engineering and Measurement, 2009. IEEE. pp. 498–501. doi:10.1109/ESEM.2009.5314237. ISBN 9781424448425.
- Soto, M.; Ciolkowski, M. (2009). "The QualOSS Process Evaluation: Initial Experiences with Assessing Open Source Processes". In O'Connor, R.; Baddoo, N.; Cuadrado-Gallego, J.J.; Rejas Muslera, R.; Smolander, K.; Messnarz, R. Software Process Improvement. Springer. pp. 105–116. doi:10.1007/978-3-642-04133-4_9. ISBN 9783642041334.
- Haaland, Kirsten; Groven, Arne-Kristian; Glott, Ruediger; Tannenberg, Anna (1 July 2010). "Free/Libre Open Source Quality Models - a comparison between two approaches" (PDF). 4th FLOSS International Workshop on Free/Libre Open Source Software. Jena, Germany. pp. 1–17. http://publications.nr.no/directdownload/publications.nr.no/5444/Haaland_-_Free_Libre_Open_Source_Quality_Models-_a_compariso.pdf. Retrieved 15 April 2015.
- Hauge, O.; Osterlie, T.; Sorensen, C.-F.; Gerea, M. (2009). "An empirical study on selection of Open Source Software - Preliminary results". ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development, 2009. IEEE. pp. 42–47. doi:10.1109/FLOSS.2009.5071359. ISBN 9781424437207.
- Ayala, Claudia; Cruzes, Daniela S.; Franch, Xavier; Conradi, Reidar (2011). "Towards Improving OSS Products Selection – Matching Selectors and OSS Communities Perspectives". In Hissam, S.; Russo, B.; de Mendonça Neto, M.G.; Kon, F. Open Source Systems: Grounding Research. Springer. pp. 244–258. doi:10.1007/978-3-642-24418-6_17. ISBN 9783642244186.
- Chan, C.; enugroho; Wasserman, T. (17 April 2013). "Business Readiness Rating (BRR)". SourceForge. https://sourceforge.net/projects/openbrr/. Retrieved 21 April 2015.
- Galli, Peter (24 April 2006). "OpenBRR Launches Closed Open-Source Group". eWeek. QuinStreet, Inc. http://www.eweek.com/c/a/Linux-and-Open-Source/OpenBRR-Launches-Closed-OpenSource-Group. Retrieved 13 April 2015.
- "Welcome to Business Readiness Rating: A FrameWork for Evaluating OpenSource Software". OpenBRR. Archived from the original on 24 December 2014. https://web.archive.org/web/20141224233009/http://www.openbrr.org/. Retrieved 14 April 2015.
- Arjona, Laura (6 January 2012). "What happened to OpenBRR (Business Readiness Rating for Open Source)?". The Bright Side. https://larjona.wordpress.com/2012/01/06/what-happened-to-openbrr-business-readiness-rating-for-open-source/. Retrieved 13 April 2015.
- "Welcome to OSSpal". OSSpal. http://osspal.org/. Retrieved 18 April 2015.
- Silva, Chamindra de (20 December 2009). "10 questions to ask when selecting open source products for your enterprise". TechRepublic. CBS Interactive. http://www.techrepublic.com/blog/10-things/10-questions-to-ask-when-selecting-open-source-products-for-your-enterprise/. Retrieved 13 April 2015.
- Phipps, Simon (21 January 2015). "7 questions to ask any open source project". InfoWorld. InfoWorld, Inc. http://www.infoworld.com/article/2872094/open-source-software/seven-questions-to-ask-any-open-source-project.html. Retrieved 10 April 2015.
- Padin, Sandro (3 January 2014). "How I Evaluate Open-Source Software". 8th Light, Inc. https://blog.8thlight.com/sandro-padin/2014/01/03/how-i-evaluate-open-source-software.html. Retrieved 1 June 2015.
- Metcalfe, Randy (1 February 2004). "Top tips for selecting open source software". OSSWatch. University of Oxford. http://oss-watch.ac.uk/resources/tips. Retrieved 23 March 2015.
- Limardo, J. (2013). "DIY Evaluation Process". LIMSExpert.com. ForwardPhase Technologies, LLC. http://www.limsexpert.com/cgi-bin/bixchange/bixchange.cgi?pom=limsexpert3&iid=readMore;go=1363288315&title=DIY%20Evaluation%20Process. Retrieved 7 February 2015.
- Wheeler, David A. (5 August 2011). "How to Evaluate Open Source Software / Free Software (OSS/FS) Programs". dwheeler.com. http://www.dwheeler.com/oss_fs_eval.html. Retrieved 19 March 2015.
- "User Requirements Specification (URS)". validation-online.net. Validation Online. http://www.validation-online.net/user-requirements-specification.html. Retrieved 8 August 2015.
- O'Keefe, Graham (1 March 2015). "How to Create a Bullet-Proof User Requirement Specification (URS)". askaboutgmp. http://www.askaboutgmp.com/296-how-to-create-a-bullet-proof-urs. Retrieved 8 August 2015.
- "ASTM E1578-13, Standard Guide for Laboratory Informatics". West Conshohocken, PA: ASTM International. 2013. doi:10.1520/E1578. http://www.astm.org/Standards/E1578.htm. Retrieved 14 March 2015.
- "User Requirements Checklist". Autoscribe Informatics. http://www.autoscribeinformatics.com/services/user-requirements. Retrieved 10 April 2015.
- Laboratory Informatics Institute, ed. (2015). "The Complete Guide to LIMS & Laboratory Informatics – 2015 Edition". LabLynx, Inc. http://www.limsbook.com/the-complete-guide-to-lims-laboratory-informatics-2015-edition/. Retrieved 10 April 2015.
- "Part 11, Electronic Records; Electronic Signatures — Scope and Application". U.S. Food and Drug Administration. 26 August 2015. http://www.fda.gov/regulatoryinformation/guidances/ucm125067.htm. Retrieved 10 June 2015.
- Segalstad, Siri H. (2008). International IT Regulations and Compliance: Quality Standards in the Pharmaceutical and Regulated Industries. John Wiley & Sons, Inc. pp. 338. ISBN 9780470758823. http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470758821.html.
- "More articles by Cynthia Harvey". Datamation. QuinStreet, Inc. 2015. http://www.datamation.com/author/Cynthia-Harvey-6460.html. Retrieved 12 April 2015.
- Harvey, Cynthia (5 January 2015). "Open Source Software List: 2015 Ultimate List". Datamation. QuinStreet, Inc. http://www.datamation.com/open-source/open-source-software-list-2015-ultimate-list-1.html. Retrieved 12 April 2015.
- "SourceForge - Download, Develop and Publish Free Open Source Software". SourceForge. Slashdot Media. 2015. https://sourceforge.net/. Retrieved 14 June 2015.
- "GitHub: Where software is built". GitHub. GitHub, Inc. 2015. https://github.com/. Retrieved 14 June 2015.
- "Comparison of source code hosting facilities". Wikipedia. Wikimedia Foundation. 21 September 2015. https://en.wikipedia.org/w/index.php?title=Comparison_of_source_code_hosting_facilities&oldid=682090863. Retrieved 28 September 2015.
- "The Architecture of Open Source Applications". AOSA. AOSA Editors. 2015. http://aosabook.org/en/index.html. Retrieved 8 October 2015.
- Knorr, Eric (28 September 2015). "5 key trends in open source". InfoWorld. InfoWorld, Inc. http://www.infoworld.com/article/2986769/open-source-tools/5-key-trends-in-open-source.html. Retrieved 28 September 2015.
- "Google Scholar". Google, Inc. 2015. https://scholar.google.com/. Retrieved 8 August 2015.
- "Microsoft Academic Search". Microsoft Corporation. 2015. http://academic.research.microsoft.com/. Retrieved 8 August 2015.
- Wasike, Sylvia Nasambu (October 2010). "Selection Process of Open Source Software Component". http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.227.5951. Retrieved 10 August 2015.
- "Open HUB, the open source network". Open HUB. Black Duck Software, Inc. 2015. https://www.openhub.net/. Retrieved 10 August 2015.
- Neteler, Markus; Bowman, M. Hamish; Landa, Martin; Metz, Markus (May 2012). "GRASS GIS: A multi-purpose open source GIS". Environmental Modelling & Software 31: 124–130. doi:10.1016/j.envsoft.2011.11.014.
- Khanine, Dmitri (7 May 2015). ""Meaningful Use" Regulations of Medical Information in Health IT". Toad World - Oracle Community. Dell Software, Inc. http://www.toadworld.com/platforms/oracle/b/weblog/archive/2015/05/07/quot-meaningful-use-quot-regulations-of-medical-information-in-health-it. Retrieved 12 June 2015.
- Khanine, Dmitri (6 May 2015). "Open-Source Medical Record Systems of 2015". Toad World - Oracle Community. Dell Software, Inc. http://www.toadworld.com/platforms/oracle/b/weblog/archive/2015/05/06/open-source-medical-record-systems-of-2015. Retrieved 28 May 2015.
- Davis, Ashley (2008). "Enterprise Resource Planning Under Open Source Software". In Ferran, Carlos; Salim, Ricardo. Enterprise Resource Planning for Global Economies: Managerial Issues and Challenges. Hershey, PA: IGI Global. doi:10.4018/978-1-59904-531-3.ch004. ISBN 9781599045313.
- Sarrab, Mohamed; Rehman, Osama M. Hussain (2013). "Selection Criteria of Open Source Software: First Stage for Adoption". International Journal of Information Processing and Management 4 (4): 51–58. doi:10.4156/ijipm.vol4.issue4.6.
- Foote, Amanda (2010). "The Myth of Free: The Hidden Costs of Open Source Software". Dalhousie Journal of Interdisciplinary Management 6 (Spring 2010): 1–9. doi:10.5931/djim.v6i1.31.
- Heinlein, Robert A. (1997). The Moon Is a Harsh Mistress. New York, NY: Tom Doherty Associates. pp. 8–9. ISBN 9780312863555.
- Fleming, Ian (2014). "ISO 9126 Software Quality Characteristics". SQA Definition. http://www.sqa.net/iso9126.html. Retrieved 18 June 2015.
- Taylor, Dave (2015). "Murphy's Laws". Dave Taylor's Educational & Guidance Counseling Services. http://davetgc.com/Murphys_Law.html. Retrieved 8 August 2015.
- Abran, Alain; Khelifi, Adel; Suryn, Witold; Seffah, Ahmed (2003). "Usability Meanings and Interpretations in ISO Standards". Software Quality Journal 11 (4): 325–338. doi:10.1023/A:1025869312943.
- Andreasen, M.S.; Nielsen, H.V.; Schrøder, S.O.; Stage, J. (2006). "Usability in open source software development: Opinions and practice" (PDF). Information Technology and Control 35 (3A): 303–312. http://itc.ktu.lt/itc353/Stage353.pdf.
- Saxena, S.; Dubey, S.K. (January 2013). "Impact of Software Design Aspects on Usability". International Journal of Computer Applications 61 (22): 48–53. doi:10.5120/10233-5043. http://www.ijcaonline.org/archives/volume61/number22/10233-5043.
- AllAboutUX.org volunteers (8 October 2010). "User experience definitions". All About UX. http://www.allaboutux.org/ux-definitions. Retrieved 8 August 2015.
- Gube, Jacob (5 October 2010). "What Is User Experience Design? Overview, Tools And Resources". Smashing Magazine. Smashing Magazine GmbH. http://www.smashingmagazine.com/2010/10/what-is-user-experience-design-overview-tools-and-resources/. Retrieved 8 August 2015.
- Reitz, Kenneth (2014). "Choosing a License". The Hitchhiker's Guide to Python!. http://docs.python-guide.org/en/latest/writing/license/. Retrieved 13 May 2015.
- Atwood, Jeff (3 April 2007). "Pick a License, Any License". Coding Horror: Programming and Human Factors. http://blog.codinghorror.com/pick-a-license-any-license/. Retrieved 13 May 2015.
- "The Open Source Definition (Annotated)". Open Source Initiative. 2015. http://opensource.org/docs/definition.php. Retrieved 28 May 2015.
- Hocevar, Sam (2015). "WTFPL – Do What the Fuck You Want to Public License". WTFPL.net. http://www.wtfpl.net/. Retrieved 16 June 2015.
- "Licenses & Standards". Open Source Initiative. 2015. http://opensource.org/licenses. Retrieved 13 May 2015.
- "TL;DRLegal - Software Licenses Explained in Plain English". FOSSA, Inc. 2015. https://tldrlegal.com/. Retrieved 16 June 2015.
- Wheeler, David A. (16 February 2014). "Make Your Open Source Software GPL-Compatible. Or Else". dwheeler.com. http://www.dwheeler.com/essays/gpl-compatible.html. Retrieved 20 March 2015.
- Wheeler, David A. (27 September 2007). "The Free-Libre / Open Source Software (FLOSS) License Slide". dwheeler.com. http://www.dwheeler.com/essays/floss-license-slide.html. Retrieved 28 May 2015.
- "What Is Docker?". Docker.com. Docker, Inc. https://www.docker.com/whatisdocker. Retrieved 18 June 2015.
- Knorr, Eric (23 March 2015). "When will we see Docker in production?". InfoWorld. InfoWorld, Inc. http://www.infoworld.com/article/2900333/cloud-computing/scenes-from-the-docker-revolution.html. Retrieved 18 June 2015.
- Emadeen, Hamza (1 August 2010). "Bika: Free, Open source LIMS – Laboratory Information Management System for Windows, Linux and Mac OSX". Goomedic.com. http://www.goomedic.com/bika-free-open-source-lims-laboratory-information-management-system-for-windows-linux-and-mac-osx.html. Retrieved 21 July 2015.
- Limardo, J. (2011). "Open-LIMS Analysis Results". LIMSExpert.com. ForwardPhase Technologies, LLC. http://www.limsexpert.com/cgi-bin/bixchange/bixchange.cgi?pom=limsexpert3&iid=readMore;go=1304895280&title=Open-LIMS%20Analysis%20Results. Retrieved 7 February 2015.
- Limardo, J. (2011). "Scoring Open-LIMS". LIMSExpert.com. ForwardPhase Technologies, LLC. http://www.limsexpert.com/cgi-bin/bixchange/bixchange.cgi?pom=limsexpert3&iid=readMore;go=1305075855&title=Scoring%20Open-LIMS. Retrieved 7 February 2015.
- Glover, H.D. (2013). "Academia Should Embrace Open Access Scholarly Publishing". Open Journal of Accounting 2 (4): 95–96. doi:10.4236/ojacct.2013.24012.
This article has not officially been published in a journal. However, this presentation is largely faithful to the original paper. The content has been edited for grammar, punctuation, and spelling. Additional error correction of a few reference URLs and types as well as cleaning up of the glossary also occurred. Redundancies and references to entities that don't offer open-source software were removed from the FLOSS examples in Table 2. DOIs and other identifiers have been added to the references to make them more useful. This article is being made available for the first time under the Creative Commons Attribution-ShareAlike 4.0 International license, the same license used on this wiki.