Open Source Software Resilience Framework (OSSRF)

The Source-o-grapher tool is available as open source software under the MIT License in this GitHub repository.
For any assistance you might need, please contact us at akritiko@csd.auth.gr.

Following the City Resilience Framework (CRF) paradigm, the Open Source Software Resilience Framework is structured in four (4) dimensions, which are analyzed into twelve (12) goals and, on a third level, into a set of indicators.

SOURCE CODE

Goals

  • Architecture: this goal is related to the aspects of the source code that structurally strengthen the project and promote seamless functionality and scaling. We propose this goal based on the ``Minimal human vulnerability'' goal found in the CRF, which promotes cities with infrastructures that provide stability (in terms of housing), effectiveness (in terms of sanitation) and robustness (in terms of access to energy supply or drinking water).
  • Maintainability: this goal relates to the maintainability of the source code. OSS projects, being collaboratively and voluntarily grown projects, often need to go through phases of refactoring, correction or other necessary improvements. Especially after a crisis (i.e. a competing project wins a big part of the user base) an OSS project needs to regroup as soon as possible. In the CRF, the ``Diverse livelihoods and employment'' goal similarly aims at maintaining the social capital after a shock (i.e. with supportive financing mechanisms) and at improving / correcting by training and promoting business development and innovation.
  • Security & Testing: this goal is related to the aspects that promote the security and correctness of the project. As with ``Effective safeguards to human health and life'', the corresponding goal found in the CRF, this goal is about foundational structures that ensure a tested, functional system.

Indicators

Robustness: is defined as ``the degree to which an executable work product continues to function properly under abnormal conditions or circumstances''. We propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Scalability: is defined as ``the ease with which an application or component can be modified to expand its existing capabilities. It includes the ability to accommodate major volumes of data''. We propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Usability: is defined as ``the degree to which the software product makes it easy for users to operate and control it''. We propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

NOTE: We base our choice for the aforementioned indicators, robustness, scalability and usability, to be qualitative (Likert scale) on [14], where Wasserman treats those indicators the same way in OSSpal.

Effectiveness: is defined as the percentage of critical bugs fixed in the last six months to all bugs fixed in the last six months. This indicator derives from the SQO-OSS quality model as published in [34].

At this point we would like to clarify that the Effectiveness indicator can follow the aforementioned definition only if the OSS project's issue tracker labels the critical bugs. In any other case this indicator should be considered qualitative, described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.
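
Where the issue tracker does label critical bugs, the indicator reduces to a simple ratio over a six-month window. Below is a minimal sketch in Python, assuming the tracker can export closed bugs as records with a labels list and a closed_at timestamp; the field names and the ``critical'' label are hypothetical, not a real tracker API.

    from datetime import datetime, timedelta

    def effectiveness(closed_bugs, window_days=182, critical_label="critical"):
        # Keep only bugs fixed within the last ~six months.
        cutoff = datetime.now() - timedelta(days=window_days)
        recent = [b for b in closed_bugs if b["closed_at"] >= cutoff]
        if not recent:
            return 0.0  # no bugs fixed in the window
        # Ratio of critical fixes to all fixes in the window.
        critical = [b for b in recent if critical_label in b["labels"]]
        return len(critical) / len(recent)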

Corrections: is proposed as part of the maintainability goal to ``try and capture the degree to which the software can be modified to serve correction purposes''.

Improvements: is proposed as part of the maintainability goal to ``try and capture the degree to which the software can be modified to serve improvement purposes''.

NOTE: Since both (Corrections, Improvements) are indicators that can apply to several different aspects of the software (i.e. changes to the environment of the software, the requirements or the functional specification), as Miguel states in [5], we propose them as qualitative indicators described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Security: is defined as ``the protection of system items from accidental or malicious access, use, modification, destruction, or disclosure''. We propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great. Again, we base our choice for this indicator on [14], where Wasserman treats it the same way in OSSpal.

Testing process: is proposed as a boolean indicator to verify that the OSS project follows a typical process as far as testing is concerned (i.e. unit testing, test-driven development techniques). In [35] the author provides an empirical study that shows the importance of test-driven techniques in software development.

Coverage: is defined as ``the ratio of basic code blocks that were exercised by some test, to the total number of code blocks in the system under test'' [36]. Therefore this is proposed as a percentage indicator.

BUSINESS & LEGAL

Goals

  • License: this goal relates to the legal aspects of an OSS project. As with the ``Comprehensive security & rule of law'' goal in the CRF, this goal describes the legal framework under which the OSS project is published, in order to proactively secure its openness and its availability to be used, reused and shared according to the license terms.
  • Market: this goal is proposed based on the ``Sustainable economy'' goal of the CRF that takes under consideration the aspects of business environment, diverse economic base and business continuity planning. In OSS we are, respectively, studying the aspects related to market and commercial use of an OSS project.
  • Support: this goal is related to a rather controversial subject in OSS. In [32], Daffara refers to the myth that OSS ``is not reliable or supported'' and argues against it. With the adoption of OSS in vital parts of companies and organizations (i.e. web servers running Apache and Linux), it has become evident that professional support is key for an OSS project to become a success. Support helps the end user feel safe (i.e. when a crisis strikes or during a shock) and provides a sense of belonging. The same goes for the ``Collective identity and community support'' goal that we find in the CRF, referring to the beneficial role of collective identity and local community support, especially in times of crisis.

Indicators

License type: Both the existence (or not) of a license and the type of the license of an OSS project play a significant role in the evolution and success of the project. In [25] the authors study how licenses, depending on the level of restrictiveness (i.e. copyright versus copyleft) or the level of persistence (GPL versus LGPL), can affect the OSS project in terms of adoption and market. In [37] Lindman et al. argue that licensing can often be a complex task for OSS teams, which is why structured license selection processes are found mainly in big OSS projects. Taking the above into consideration, this indicator is described by the following values: 1 - all restrictive / commercial, 2 - not licensed, 3 - mixed license, 4 - persistent / viral license (i.e. GPL), 5 - all permissive license (i.e. MIT).
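
The mapping from a project's declared licenses to this 1-5 scale can be sketched as follows in Python; the SPDX-style identifiers and the classification of each license into the permissive / viral buckets are illustrative assumptions, not part of the framework's definition.

    # Hypothetical classification of common SPDX-style identifiers.
    PERMISSIVE = {"MIT", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0"}
    VIRAL = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-3.0"}

    def license_type_score(licenses):
        if not licenses:
            return 2  # not licensed
        kinds = set()
        for lic in licenses:
            if lic in PERMISSIVE:
                kinds.add("permissive")
            elif lic in VIRAL:
                kinds.add("viral")
            else:
                kinds.add("restrictive")
        if kinds == {"permissive"}:
            return 5  # all permissive (i.e. MIT)
        if kinds == {"viral"}:
            return 4  # persistent / viral (i.e. GPL)
        if kinds == {"restrictive"}:
            return 1  # all restrictive / commercial
        return 3      # mixed license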

Dual licensing: While dual licensing is not undeniably a success factor for an OSS project, studies like [38] and [32] argue that it is a key factor when it comes to the commercialization of an OSS project. Therefore, we consider it a plus when it comes to the market goal. We propose that this indicator is boolean: 0 - for non dual licensed projects and 1 - for dual licensed ones.

Commercial resources: Providing commercial resources (i.e. user guides or merchandise) is a known business model for OSS projects. We propose that this indicator is boolean with: 0 - for projects with no commercial resources and 1 - for projects with commercial resources.

Commercial training: Providing commercial training (i.e. video tutorials or Massive Open Online Courses (MOOCs)) is another known business model for OSS projects. We propose that this indicator is boolean with: 0 - for projects with no commercial training and 1 - for projects with commercial training.

NOTE: In regard to commercial resources and commercial training, in [24] the authors study how well known companies like IBM and Red Hat achieved competitiveness and economic growth by providing added value services to their open source solutions.

Industry adoption: An OSS project that manages to attract the interest of the industry is more likely to succeed market wise. In [32] the author argues that open source software boosts both innovation and software development speed, whereas in [39], a scientific work about open innovation in the SME food industry, the authors highlight that ``open innovation offers SMEs a special avenue to better compete in the marketplace''. We propose this indicator as boolean with: 0 - indicating projects with no industry adoption and 1 - indicating projects that have been adopted by the industry.

Non profit / Foundation support: Many successful OSS projects are supported by non-profits. Sometimes those non-profits are created specifically to support those projects (i.e. the Free Software Foundation, Linux Foundation, WordPress Foundation, Blender Foundation and so forth), as mentioned in [40]. We define this indicator as boolean with 0 - for projects not supported by a non-profit organization and 1 - for projects supported by a non-profit organization.

For profit / company support: As with the Non profit / Foundation support indicator, the ``For profit / company support'' indicator takes into consideration the existence of a company ``attached'' to, or supporting, an OSS project. There are examples of well known projects that have helped companies build business models around them (i.e. Red Hat offering paid services for Linux installations or Automattic for WordPress). We propose this indicator as boolean with 0 - for projects not supported by a company and 1 - for projects supported by a company.

Donations: Donations have been one of the best known ways for OSS projects to earn money since the early days of open source software. In [41] the author refers to donations as an ``indicator of acceptance''. We propose that this indicator is boolean with 0 - for projects not accepting donations and 1 - for projects that accept donations.

INTEGRATION & REUSE

Goals

  • Initialization: this goal is proposed with the project's ability to be initialized in a way that supports its uninterrupted functionality in mind. The same goal also relates to the agility of an OSS project regarding its configuration. We argue that this goal shares conceptual similarities with the ``Effective provision of critical services'' goal of the CRF, which takes into consideration all those factors that predefine and protect critical assets, services and ecosystems within a city.
  • Dependencies: this goal takes into consideration the dependencies that an OSS project uses in order to function properly. Dependencies are only as good for the project as their own quality and resilience. This is why this goal, in order to promote the OSS project, is aligned with the ``Reduced exposure and fragility'' goal of the CRF.
  • Reuse: this goal is about the ability of the OSS project, or at least parts of it, to be used by other OSS projects. Reusability, apart from making a project a good candidate for fulfilling another software's requirements, is also an indicator of high quality architecture and source code. For the purposes of our framework it aligns with the ``Reliable mobility and communications'' goal of the CRF model, in the sense that reusable components promote mobility and tend to integrate with, or be integrated into, other OSS projects easily.

Indicators

Installability: is defined as ``the degree to which the software product can be successfully installed and uninstalled in a specified environment''. We propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

NOTE: We base our choice for the aforementioned indicator, installability, to be qualitative (Likert scale) on [14], where Wasserman treats it the same way in OSSpal.

Configurability: is defined as ``the ability of the component to be configurable''. In [42] the authors argue that highly-configurable systems lead to exponentially growing configuration spaces, making quality assurance challenging. Based on that, we propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Self-contained: is defined as ``the function that the component performs must be fully performed within itself''. In [47] the authors conduct a performance evaluation of open source graph database projects and conclude that self-containment makes a project a better candidate over competing ones. Since, to the best of our knowledge, there is no well established metric for the self-containment of an OSS project, we propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Resource Utilization: is defined as ``the degree to which the software product uses appropriate amounts and types of resources when the software performs its function under stated conditions''. In [43] the authors study operating systems and highlight that, oftentimes, an OSS project is designed with the end user in mind and thus the focus is mainly on ease of use, performance and security, and not on resource utilization. This difficulty in having a clear metric for resource utilization led us to propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Complexity: or CC is defined as ``a quantitative measure of the number of linearly independent paths through a program's source code'' by McCabe [44]. The authors of [45] and [46] group the values of the CC metric into four ranges: 1-10, 11-20, 21-50 and greater than 50.

In the aforementioned studies there is a debate on whether the first group of values should stop at 10 or at 15, with the first group described as without much risk and the second as of moderate risk. We propose an indicator that provides a fifth tier of complexity, taking into consideration the threshold of 15: 1-10, 11-15, 16-20, 21-50 and greater than 50.

So, depending on McCabe's CC metric, this indicator's values are described as follows, based on the risk deriving from the complexity of the product: 1 - very high risk (CC > 50), 2 - high risk (CC 21-50), 3 - moderate risk (CC 16-20), 4 - little risk (CC 11-15), 5 - without much risk (CC 1-10).
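
The mapping from a measured CC value to the five risk tiers is mechanical; a minimal Python sketch follows, with the range boundaries reconstructing the 10 / 15 discussion above.

    def complexity_score(cc):
        # Map McCabe's cyclomatic complexity to the 5-tier risk scale.
        if cc <= 10:
            return 5  # without much risk
        if cc <= 15:
            return 4  # little risk
        if cc <= 20:
            return 3  # moderate risk
        if cc <= 50:
            return 2  # high risk
        return 1      # very high risk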

Modularity: is defined as ``the degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components''. As Viseur states in [48], high modularity of an OSS project is a competitive advantage for developers and, at the same time, allows users to gradually discover and use functionality (i.e. Mozilla Firefox add-ons). Since, to the best of our knowledge, there is no well established metric for the modularity of an OSS project, we propose that this should be a qualitative indicator described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

Instability: is defined by Martin [63] as I = Ce / (Ca + Ce), that is, the ratio of efferent coupling (Ce) to the sum of afferent coupling (Ca) and efferent coupling. This metric has a range [0,1], where I = 0 indicates a maximally stable category and I = 1 indicates a maximally unstable category. The lower the number, the more stable the project; therefore, for this indicator, the final value that we use in the framework calculations is 1 - I.
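
A minimal sketch of this computation in Python, taking the two coupling counts as inputs; treating the zero-coupling corner case as maximally stable is an assumption, since Martin's formula is undefined there.

    def instability_indicator(ca, ce):
        # I = Ce / (Ca + Ce); the indicator inverts it so that
        # higher values mean a more stable project.
        if ca + ce == 0:
            return 1.0  # no couplings at all (assumed maximally stable)
        return 1.0 - ce / (ca + ce)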

Cohesion: is measured by Chidamber and Kemerer in [64] using the Lack of Cohesion in Methods (LCOM) metric. We then use the size-dependent thresholds provided by Ferreira et al. in [50] to map the resulting LCOM value, based on the size of the software, to one of three levels of cohesion.

Therefore our cohesion indicator ranges in [1,3], with 1 - indicating bad cohesion, 2 - regular cohesion and 3 - good cohesion.
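
A sketch of this mapping in Python; the actual size-dependent thresholds must be taken from Ferreira et al. [50] and are passed in as parameters here rather than reproduced.

    def cohesion_score(lcom, good_max, regular_max):
        # good_max / regular_max are the LCOM thresholds from [50]
        # for the software's size category (values not reproduced here).
        if lcom <= good_max:
            return 3  # good cohesion
        if lcom <= regular_max:
            return 2  # regular cohesion
        return 1      # bad cohesion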

NOTE: At this point we would like to clarify that the Instability and Cohesion indicators can follow the aforementioned definitions only if the OSS project follows the object oriented development style. If the OSS project is not object oriented, we propose that the aforementioned indicators should be considered qualitative indicators described with the following values provided by an expert. 1 - worst, 2 - little, 3 - moderate, 4 - good, 5 - great.

SOCIAL (COMMUNITY)

Goals

  • Development Process & Governance: this goal was designed to fully align with the ``Effective leadership and management'' goal of the CRF. Its indicators verify that the project has all the necessary information to guide its users and developers through the process of evolving the software. It also provides the necessary mechanisms to ensure a friendly and open environment in which everyone can contribute equally to the project (i.e. through a governance model).
  • Developer Base: this goal is related to the development community of the project. In order for an OSS project to be successful, this part of the community needs to always stay motivated and active. This goal is similar to ``Integrated development planning'' of the CRF.
  • User Base: this goal is related to the end users of the OSS project. It can potentially include members of the development community who are also users of the software or have undertaken the role of a tester. This part of the community are the ``customers'' of the OSS project, and hence it is really important to keep them engaged and motivated, as their feedback (i.e. feature proposals, bug reports and so forth) is invaluable. This goal is aligned with the ``Empowered stakeholders'' goal of the CRF.

Indicators

Governance model: the existence of a governance model for an OSS project is considered mandatory, especially if the project wants to become self-sustainable using one or more of the business models discussed in the Business & Legal dimension. We propose this indicator as boolean with: 0 - for projects that do not utilize a governance model and 1 - for projects that do.

Project Road-map: a project roadmap, as with the governance model, is an indicator of a well organized project with clear goals and milestones that it wants to share openly with its community. We propose this indicator as boolean with: 0 - for projects that do not use roadmaps and 1 - for projects that use roadmaps.

NOTE: In [51] the authors study community aspects for well known, hybrid, OSS projects with commercial success and highlight both governance and roadmap existence as indicators of healthy OSS communities.

Code of conduct: OSS projects form global, diverse communities that work asynchronously and therefore need to set the rules of communication and interaction between their members. We propose this indicator as boolean with: 0 - for projects that do not use a code of conduct and 1 - for projects that use one.

Documentation standards: another critical indicator for the success of community driven projects, such as OSS projects, is the existence of standards for the documentation of the source code. These standards help newcomers easily understand the existing code base and smoothly become part of the team. We propose this indicator as boolean with: 0 - for projects that do not use documentation standards and 1 - for projects that use them.

NOTE: In [52] the authors investigate work practices used by contributors to well established OSS projects and highlight both Code of conduct and Documentation standards.

Coding standards: coding standards have always been part of OSS projects' documentation. They serve as the source code development manual for the developers in the community of the OSS project and have been adopted by the leaders of the free / open source software movement (the Linux kernel, GNU, and so forth). Coding standards indicate professionalism and maturity for the OSS project. We propose this indicator as boolean with: 0 - for projects that do not use coding standards and 1 - for projects that use coding standards.

NOTE: The following indicators were inspired mainly by the works of Robles et al. [16] and Wasserman [14]. We will provide extra references per indicator where necessary.

Developers Attracted: is proposed as the ratio of developers who joined the project in the last six (6) months to the total number of developers. The indicator's value ranges in [0,1].

Active Developers: is proposed as the ratio of developers that have been active, contributing to the project, in the last six (6) months to the total number of developers. The indicator's value ranges in [0,1].

Number of open issues: is proposed as the ratio of the currently open issues to the total issues reported since the beginning of the project. This indicator gives us a perspective of the activity of the community regarding bug reporting. It ranges in [0,1]. The lower the number, the fewer open issues the project has; therefore, for this indicator, the final value that we use in the framework calculations is 1 - Number of open issues.

Open / Closed issues: is proposed as the ratio of the number of issues opened in the last twelve (12) months to the number of issues closed in the last twelve (12) months. This indicator gives us a perspective of the activity of the community regarding bug fixing. It ranges in [0,1].
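
The four developer and issue ratios above are straightforward to compute once the raw counts have been extracted from the repository and issue tracker; a minimal Python sketch, with all counts assumed to be gathered beforehand:

    def developers_attracted(joined_last_6m, total_devs):
        return joined_last_6m / total_devs if total_devs else 0.0

    def active_developers(active_last_6m, total_devs):
        return active_last_6m / total_devs if total_devs else 0.0

    def open_issues_indicator(open_now, total_reported):
        ratio = open_now / total_reported if total_reported else 0.0
        return 1.0 - ratio  # inverted: fewer open issues scores higher

    def open_closed_issues(opened_12m, closed_12m):
        return opened_12m / closed_12m if closed_12m else 0.0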

Source Code Documentation: is defined as the ratio of the number of comment lines of code (CLOC) to the number of lines of code (LOC). This indicator gives us a perspective of the documentation effort put into the source code. It ranges in [0,1].
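
A rough sketch of this ratio for a single Python source file; a real evaluation would use a language-aware counter (e.g. a tool such as cloc) across the whole code base.

    def documentation_ratio(path):
        cloc = loc = 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                stripped = line.strip()
                if not stripped:
                    continue  # skip blank lines
                loc += 1
                if stripped.startswith("#"):
                    cloc += 1
        return cloc / loc if loc else 0.0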

Localization Process: an OSS project's localization process (i.e. translation of the software and/or project resources) is a best practice that is backed by the literature. In [54] the authors argue that software translations benefit the evolution and growth of OSS and should thus be one of the project leaders' priorities. We propose this indicator as boolean with: 0 - for projects without a localization process and 1 - for projects with one.

Issue tracking activity (reporting bugs): is defined as the ratio of the number of bugs reported in the last twelve (12) months to the number of bugs reported since the beginning of the project. This indicator ranges in [0,1].

NOTE: We would like to clarify that the coexistence of bug reports and open / closed issues serves the need to measure the bug reports from the end users separately from the technical issues usually reported by the developers of the project. We acknowledge that sometimes the developers of an OSS project also function as end users but, as the authors of [55] state, oftentimes end users' reports are misclassified as bugs when they are really features (i.e. code enhancement requests or customization requests).

User guide (completeness): this indicator has the goal of evaluating the maturity of an OSS project's user guide. User guides have been adopted by the most evolved and well known OSS projects (for example, the GNU Emacs manuals [56]). We propose the indicator's values as follows: 1 - non existing user guide, 2 - on hiatus / discontinued, 3 - pre release (alpha / beta / release candidate), 4 - released (version 1.0+), 5 - commercial versions of the guide.

Resilience Determination Mechanism

Since the evaluation of a project regarding its resilience is based on indicators, we need a mechanism to determine whether the OSS project under review is resilient and, on a second level, how its resilience changes as it evolves. Starting at the indicators level, we consider an OSS project successful towards a resilience goal when it is considered resilient in at least 50% of the goal's indicators.

Moving to the dimensions level, an OSS project is considered successful towards a resilience dimension when it is considered resilient in at least 50% of the goals of the specific dimension. Likewise, on the project level, the OSS project is considered resilient when at least two (2) out of its four (4) dimensions (50%) are considered resilient.
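
The whole mechanism is a three-level majority vote; a minimal Python sketch, assuming each indicator has already been reduced to a boolean ``resilient'' verdict:

    def majority(flags):
        # At least 50% of the entries must pass.
        return bool(flags) and sum(flags) >= len(flags) / 2

    def project_resilient(dimensions):
        # dimensions: {dimension name: {goal name: [indicator verdicts]}}
        resilient_dims = [
            majority([majority(indicators) for indicators in goals.values()])
            for goals in dimensions.values()
        ]
        return sum(resilient_dims) >= 2  # at least 2 of the 4 dimensions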

References

[1] Jeff McAffer. Microsoft joins the Open Source Initiative. https://open.microsoft.com/2017/09/26/microsoft-joins-open-source-initiative/, 2017. [Online]. [ bib ]
[2] Steve Weber. The success of open source. Harvard University Press, 2004. [ bib ]
[3] Dharmesh Thakker Max Schireson. The Money In Open-Source Software. https://techcrunch.com/2016/02/09/the-money-in-open-source-software/, 2016. [Online]. [ bib ]
[4] Eric Raymond. The cathedral and the bazaar. Philosophy & Technology, 12(3):23, 1999. [ bib ]
[5] José P Miguel, David Mauricio, and Glen Rodríguez. A review of software quality models for the evaluation of software products. arXiv preprint arXiv:1412.2977, 2014. [ bib ]
[6] Vishal Midha and Prashant Palvia. Factors affecting the success of open source software. Journal of Systems and Software, 85(4):895--905, 2012. [ bib ]
[7] Vision Mobile. Open governance index: measuring the true openness of open source projects from Android to WebKit. 2011. [ bib ]
[8] City Resilience Index. City resilience framework. The Rockefeller Foundation and ARUP, 2014. [ bib ]
[9] Andreas Wieland and Carl Marcus Wallenburg. The influence of relational competencies on supply chain resilience: a relational view. International Journal of Physical Distribution & Logistics Management, 43(4):300--320, 2013. [ bib ]
[10] C Warren Axelrod. Investing in software resiliency. 2009. [ bib ]
[11] J Da Silva and B Morera. City resilience framework. Arup & Rockefeller Foundation. Online: http://publications.arup.com/Publications/C/City_Resilience_Framework.aspx [12/15/2015], 2014. [ bib ]
[12] International Organization for Standardization. ISO-IEC 25010:2011 Systems and Software Engineering-Systems and Software Quality Requirements and Evaluation (SQuaRE)-System and Software Quality Models. ISO, 2011. [ bib ]
[13] Anthony Wasserman, Murugan Pal, and Christopher Chan. The business readiness rating model: an evaluation framework for open source. In Proceedings of the EFOSS Workshop, Como, Italy, 2006. [ bib ]
[14] Anthony I Wasserman, Xianzheng Guo, Blake McMillian, Kai Qian, Ming-Yu Wei, and Qian Xu. OSSpal: Finding and evaluating open source software. In IFIP International Conference on Open Source Systems, pages 193--203. Springer, 2017. [ bib ]
[15] Sandro Andrade and Filipe Saraiva. Principled evaluation of strengths and weaknesses in floss communities: A systematic mixed methods maturity model approach. In IFIP International Conference on Open Source Systems, pages 34--46. Springer, 2017. [ bib ]
[16] Jose Teixeira, Gregorio Robles, and Jesús M González-Barahona. Lessons learned from applying social network analysis on an industrial free/libre/open source software ecosystem. Journal of Internet Services and Applications, 6(1):14, 2015. [ bib ]
[17] 100 Resilient Cities. http://www.100resilientcities.org/, 2013. [Online]. [ bib ]
[18] James Piggot and Chintan Amrit. How healthy is my project? open source project attributes as indicators of success. In IFIP International Conference on Open Source Systems, pages 30--44. Springer, 2013. [ bib ]
[19] PHPQA Tool, Official website. https://edgedesigncz.github.io/phpqa/. [Online]. [ bib ]
[20] David A Wheeler. More than a gigabuck: Estimating GNU/Linux's size, 2001. [ bib ]
[21] Robert Martin. OO design quality metrics. An analysis of dependencies, 12:151--170, 1994. [ bib ]
[22] Javier Luis Cánovas Izquierdo and Jordi Cabot. Enabling the definition and enforcement of governance rules in open source systems. In Proceedings of the 37th International Conference on Software Engineering-Volume 2, pages 505--514. IEEE Press, 2015. [ bib ]
[23] Ross Gardler and Gabriel Hanganu. Governance models. Open Source Software Watch, last modified February, 14, 2012. [ bib ]
[24] Neeshal Munga, Thomas Fogwill, and Quentin Williams. The adoption of open source software in business models: a Red Hat and IBM case study. In Proceedings of the 2009 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists, pages 112--121. ACM, 2009. [ bib ]
[25] M Välimäki and Ville Oksanen. Evaluation of open source licensing models for a company developing mass market software. Law and Technology, 2002. [ bib ]
[26] Gitstats Tool - Official Website. http://gitstats.sourceforge.net/. [Online]. [ bib ]
[27] OKapi Github Repository. https://github.com/liip/Okapi. [Online]. [ bib ]
[28] WooCommerce Github Repository. https://github.com/woocommerce/woocommerce. [Online]. [ bib ]
[29] Jonas Gamalielsson and Björn Lundell. Sustainability of open source software communities beyond a fork: How and why has the libreoffice project evolved? Journal of Systems and Software, 89:128 -- 145, 2014. [ bib | DOI | http ]
[30] Apostolos Kritikos and Ioannis Stamelos. Open source software resilience framework. In IFIP International Conference on Open Source Systems, pages 39--49. Springer, 2018. [ bib ]
[31] A. Ampatzoglou, A. Gkortzis, S. Charalampidou, and P. Avgeriou. An embedded multiple-case study on oss design quality assessment across domains. In 2013 ACM / IEEE International Symposium on Empirical Software Engineering and Measurement, pages 255--258, Oct 2013. [ bib | DOI ]
[32] Carlo Daffara. The SME guide to open source software. 2009. [ bib ]
[33] Karl Michael Popp. Best Practices for commercial use of open source software: Business models, processes and tools for managing open source software. BoD--Books on Demand, 2015. [ bib ]
[34] Ioannis Samoladas, Georgios Gousios, Diomidis Spinellis, and Ioannis Stamelos. The SQO-OSS quality model: measurement based open source software evaluation. In IFIP International Conference on Open Source Systems, pages 237--248. Springer, 2008. [ bib ]
[35] Lech Madeyski. Test-driven development: An empirical evaluation of agile practice. Springer Science & Business Media, 2009. [ bib ]
[36] I Baxter. Branch coverage for arbitrary languages made easy: Transformation systems to the rescue. IWAPATV2/ICSE2001. http://techwell.com/sites/default/files/articles/XUS1173972file1_0.pdf, 2001. [ bib ]
[37] Juho Lindman, Anna Paajanen, and Matti Rossi. Choosing an open source software license in commercial context: A managerial perspective. In 36th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA 2010), pages 237--244. IEEE, 2010. [ bib ]
[38] Mikko Valimaki. Dual licensing in open source software industry. 2002. [ bib ]
[39] I Sam Saguy and Vera Sirotinskaya. Challenges in exploiting open innovation's full potential in the food industry with a focus on small and medium enterprises (SMEs). Trends in Food Science & Technology, 38(2):136--148, 2014. [ bib ]
[40] Javier Luis Cánovas Izquierdo and Jordi Cabot. The role of foundations in open source projects. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Society, ICSE-SEIS '18, pages 3--12, New York, NY, USA, 2018. ACM. [ bib | DOI | http ]
[41] Slinger Jansen. Measuring the health of open source software ecosystems: Beyond the scope of project health. Information and Software Technology, 56(11):1508--1519, 2014. [ bib ]
[42] Jens Meinicke, Chu-Pan Wong, Christian Kästner, Thomas Thüm, and Gunter Saake. On essential configuration complexity: Measuring interactions in highly-configurable systems. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering, ASE 2016, pages 483--494, New York, NY, USA, 2016. ACM. [ bib | DOI | http ]
[43] Abraham Silberschatz, Greg Gagne, and Peter B Galvin. Operating system concepts. Wiley, 2018. [ bib ]
[44] T. J. McCabe. A complexity measure. IEEE Transactions on Software Engineering, SE-2(4):308--320, Dec 1976. [ bib | DOI ]
[45] Michael Bray, Kimberly Brune, David A Fisher, John Foreman, and Mark Gerken. C4 software technology reference guide-a prototype. Technical report, Carnegie-Mellon Univ Pittsburgh Pa Software Engineering Inst, 1997. [ bib ]
[46] Arthur Henry Watson, Dolores R Wallace, and Thomas J McCabe. Structured testing: A testing methodology using the cyclomatic complexity metric, volume 500. US Department of Commerce, Technology Administration, National Institute of Standards and Technology, 1996. [ bib ]
[47] Robert Campbell McColl, David Ediger, Jason Poovey, Dan Campbell, and David A Bader. A performance evaluation of open source graph databases. In Proceedings of the first workshop on Parallel programming for analytics applications, pages 11--18. ACM, 2014. [ bib ]
[48] Robert Viseur. Identifying success factors for the mozilla project. In IFIP International Conference on Open Source Systems, pages 45--60. Springer, 2013. [ bib ]
[49] Shyam R Chidamber and Chris F Kemerer. A metrics suite for object oriented design. IEEE Transactions on software engineering, 20(6):476--493, 1994. [ bib ]
[50] Kecia AM Ferreira, Mariza AS Bigonha, Roberto S Bigonha, Luiz FO Mendes, and Heitor C Almeida. Identifying thresholds for object-oriented software metrics. Journal of Systems and Software, 85(2):244--257, 2012. [ bib ]
[51] Hanna Mäenpää, Simo Mäkinen, Terhi Kilamo, Tommi Mikkonen, Tomi Männistö, and Paavo Ritala. Organizing for openness: six models for developer involvement in hybrid oss projects. Journal of Internet Services and Applications, 9(1):17, 2018. [ bib ]
[52] Simon Butler, Jonas Gamalielsson, Björn Lundell, Per Jonsson, Johan Sjöberg, Anders Mattsson, Niklas Rickö, Tomas Gustavsson, Jonas Feist, Stefan Landemoo, et al. An investigation of work practices used by companies making contributions to established oss projects. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, pages 201--210. ACM, 2018. [ bib ]
[53] Gregorio Robles, Jesus M Gonzalez-Barahona, and Israel Herraiz. An empirical approach to software archaeology. In Proc. of 21st Int. Conf. on Software Maintenance (ICSM 2005), Budapest, Hungary, pages 47--50, 2005. [ bib ]
[54] Chandrasekar Subramaniam, Ravi Sen, and Matthew L Nelson. Determinants of open source software project success: A longitudinal study. Decision Support Systems, 46(2):576--585, 2009. [ bib ]
[55] Kim Herzig, Sascha Just, and Andreas Zeller. It's not a bug, it's a feature: How misclassification impacts bug prediction. In Proceedings of the 2013 International Conference on Software Engineering, ICSE '13, pages 392--401, Piscataway, NJ, USA, 2013. IEEE Press. [ bib | http ]
[56] GNU Emacs Manuals. https://www.gnu.org/software/emacs/manual/. [Online]. [ bib ]
[57] How Does Open Source Die?, O'Reilly. https://www.oreilly.com/library/view/open-source-for/0596101198/ch01s07.html. [Online]. [ bib ]
[58] Linux Kernel Github Repository. https://github.com/torvalds/linux. [Online]. [ bib ]
[59] Linux Kernel Official Website. https://www.kernel.org. [Online]. [ bib ]
[60] C Coverage Test Tool - Website. http://semdesigns.com/Products/TestCoverage/CTestCoverage.html. [Online]. [ bib ]
[61] SciTools Understand - Website. https://scitools.com/features/. [Online]. [ bib ]
[62] PHPCoverage Tool - Website. http://phpcoverage.sourceforge.net/. [Online]. [ bib ]
[63] Robert Martin. OO design quality metrics. An analysis of dependencies, 12:151--170, 1994. [ bib ]
[64] Shyam R Chidamber and Chris F Kemerer. A metrics suite for object oriented design. IEEE Transactions on Software Engineering, 20(6):476--493, 1994. [ bib ]


...