▲ Adema, Janneke, and Gary Hall. 2015. Really, We’re Helping to Build This . . . Business: The Academia.Edu Files. https://liquidbooks.pbworks.com/w/page/106236504/The%20Academia_edu%20Files
In this compilation of files, 11 authors critique Academia.edu and other dot-com sites that present themselves as open access platforms while taking investments from venture capitalists they will eventually have to pay back. The authors see Academia.edu as an extremely influential company that is poised to collect—and potentially exploit—a tremendous amount of data. They argue that Academia.edu’s efficacy and longevity may ultimately be compromised by its profit-driven model and the kinds of actions that such a model can engender. They view sites like Academia.edu as a problematic form of academic social media in which increased user participation builds social capital within a network of competition for recognition, often framing this function in terms of the Foucauldian “technologies of the self.” Finally, the authors gesture toward theorizing a more economically progressive academic social networking site that does not rely on venture capital investments, heralding the idea of commonly owned, open-source, and economically and socially progressive social media that would nurture and interact with scholarly research as a commons.
¤ Arbuckle, Alyssa. 2019. “Open+: Versioning Open Social Scholarship.” KULA: Knowledge Creation, Dissemination, and Preservation Studies 3 (February): 18. https://doi.org/10.5334/kula.39
Advocates of the Open Access movement have been fighting for free and unfettered access to research output since the early 1990s. Open access is a crucial element of a fair, efficient scholarly communication system in which all are able to find, interpret, and use the results of publicly funded research. Arbuckle argues that versioning scholarship across varying modes and formats would move scholarly communication from a straightforward open access system to a more engaging environment for multiple communities. Universal open access is more possible now than ever before, thanks to networked technologies and the development of open scholarship policies. But, Arbuckle asks, what happens after access to research is provided?
CFI (Canada Foundation for Innovation). 2015. “Developing a Digital Research Infrastructure for Canada: The CFI Perspective.” https://www.innovation.ca/sites/default/files/Funds/cyber/developing-dri-strategy-canada-en.pdf
This report by the Canada Foundation for Innovation (CFI) offers an overview of Canada’s existing digital research infrastructure (DRI), presents its vision of an evolving digital research infrastructure ecosystem, and recommends actions for achieving it. The report argues that, as research in all disciplines becomes increasingly data-driven, the need for a well-developed digital research infrastructure ecosystem grows ever clearer. The evolving research environment has led to shifting paradigms about digital research infrastructure, including a move from an emphasis on individual datasets, initiatives, and digital platforms to collective, collaborative, and distributed infrastructures. The Canada Foundation for Innovation’s envisioned digital research infrastructure is distributed and responsive to users’ needs, with robust governance and stable funding to ensure sustainability. Its recommended actions emphasize planning activities related to integrating existing components, technical investments, and user requirements, as well as developing a coordinated approach to achieving those planned outcomes, including through the training of highly qualified personnel.
▲ Duffy, Brooke Erin, and Jefferson Pooley. 2017. “‘Facebook for Academics’: The Convergence of Self-Branding and Social Media Logic on Academia.Edu.” Social Media + Society 3. https://doi.org/10.1177/2056305117696523
Duffy and Pooley describe the growing trend of academics engaging in self-promoting practices such as creating and maintaining a brand for themselves, especially in the digital world. They analyze Academia.edu’s Silicon Valley startup aesthetic and business model, as well as the social culture manifested on the site, in order to argue that the popularity of scholarly social networking sites such as Academia.edu is tied to—but also compounds—the intense pressures academics face to market themselves and their work. Duffy and Pooley pinpoint the social media characteristics of Academia.edu, such as the prioritization of feedback mechanisms, analytics, and user-generated materials. These elements contribute to the authors’ assertion that Academia.edu goes hand-in-hand with unhealthy aspects of academic culture within the neoliberal university. The authors conclude with a warning about the risk that sites like Academia.edu will lead academics to internalize market pressures, and about Academia.edu’s position as a for-profit company that amasses enormous collections of user data and imposes membership barriers to access while purporting to offer a viable path to an open access future.
+ Hiebert, Matthew, William R. Bowen, and Raymond Siemens. 2015. “Implementing a Social Knowledge Creation Environment.” Scholarly and Research Communication 6 (3). https://doi.org/10.22230/src.2015v6n3a223
Hiebert, Bowen, and Siemens introduce Iter Community, a public-facing web-based platform prototyped by the Electronic Textual Cultures Lab and Iter: Gateway to the Middle Ages and Renaissance, with a specific focus on how this platform is geared toward facilitating social knowledge creation. The authors argue that the emerging area of research known as social knowledge creation promotes critical interventions into the more conventional processes of academic knowledge production. This type of research is increasingly facilitated by emerging technologies that allow research groups to participate more actively in the dissemination of their work and in communication with other partners. Iter Community is meant as a critical intervention into modes of scholarly production and publication, and models how implementing functionalities that support social knowledge creation can facilitate novel research opportunities and invite scholars and members of the community to participate in the creation of knowledge. The platform facilitates online knowledge production and dissemination in ways that ultimately enhance research practices and community outreach.
* Jones, Christopher. 2015a. “Institutional Supports for Openness.” In Networked Learning: An Educational Paradigm for the Age of Digital Networks, 124–26. Cham, Switzerland: Springer. https://link.springer.com/content/pdf/10.1007%2F978-3-319-01934-5.pdf
Jones asserts that, in the interest of their development and sustainability, open educational resources (OERs) should be provided with varied institutional support, including on technical and funding fronts. Moreover, considering the large amount of information offered by OERs, universities have a responsibility to guarantee the quality of the materials they produce. Jones fears that without proper university support, OERs could fall under the control of a technological elite and the business sector and, as a result, lose their main purpose. He claims that OERs still require institutional structures, even though they can replace, reform, or create new institutional approaches.
¤ Lovett, Julia, Andrée Rathemacher, Divana Boukari, and Corey Lang. 2017. “Institutional Repositories and Academic Social Networks: Competition or Complement? A Study of Open Access Policy Compliance vs. ResearchGate Participation.” Journal of Librarianship and Scholarly Communication 5 (1): eP2183. https://doi.org/10.7710/2162-3309.2183
Lovett, Rathemacher, Boukari, and Lang set out to compare whether faculty members at the University of Rhode Island deposit their work more with ResearchGate or with the institution’s own repository. To do so, the authors carry out a population study and survey of over 500 faculty members. They find that scholars who are prone to depositing with one system are likely to deposit with the other as well. As such, Lovett et al. argue, librarians should not consider social networking sites as competition for institutional open access repositories. The authors do, however, suggest that further education is still needed about what open access policies entail, the differences between commercial and academic repositories, and the benefits of publishing open access.
¤ Neylon, Cameron. 2017b. “Sustaining Scholarly Infrastructures through Collective Action: The Lessons That Olson Can Teach Us.” KULA: Knowledge Creation, Dissemination, and Preservation Studies 1 (1): n.p. https://kula.uvic.ca/index.php/kula/article/view/120
Neylon meditates on how best to enable sustainability for large-scale scholarly communication infrastructure. Neylon’s goal is to explore “how we can sustain shared platform systems that support scholarly communities through the collection, storage, and transmission of shared resources” (n.p.), and he provides a number of examples of scholarly infrastructures and their funding models. Comparing these initiatives within the context of political economy literature, Neylon concludes that a smaller governance body coordinating the development of scholarly infrastructure would be valuable, as it would put into place a reasonable number of decision makers to guide collective action. Since scholarly communication encompasses many thousands of institutions and actors, it would be nearly impossible to coordinate large-scale infrastructure otherwise. In this way, a sustainable and sustaining infrastructure could be developed—one that is responsive to fluctuations in the size and scope of the community.
+ Ridley, Michael, Clare Appavoo, and Sabina Pagotto. 2015. “Seeing the Forest and the Trees: The Integrated Digital Scholarship Ecosystem (IDSE) Project of the Canadian Research Knowledge Network (CRKN).” Association of College and Research Libraries Conference 7. http://hdl.handle.net/11213/17907
Ridley, Appavoo, and Pagotto present the Integrated Digital Scholarship Ecosystem (IDSE) project of the Canadian Research Knowledge Network (CRKN). The Canadian Research Knowledge Network is a partnership of 75 Canadian universities working toward increasing digital content for research in Canada, which has a significant impact on Canadian research and academic libraries. This article relates the findings of a study on digital scholarship within these institutions, focusing specifically on the first phase of the Integrated Digital Scholarship Ecosystem, an initiative dedicated to advancing research in Canada by exploring the state of the digital landscape. Some important issues addressed in phase one are the role of the library, preservation, the development of a research agenda, promotion and tenure, and ways to address and sustain the voice of a community. According to the authors, the Integrated Digital Scholarship Ecosystem helps identify the areas of digital scholarship that most need attention.
ζ Winter, Caroline, Tyler Fontenot, Luis Meneses, Alyssa Arbuckle, Ray Siemens, and the ETCL and INKE Research Groups. 2020. “Foundations for the Canadian Humanities and Social Sciences Commons: Exploring the Possibilities of Digital Research Communities.” Pop! Public. Open. Participatory 2 (October). https://popjournal.ca/issue02/winter
This paper introduces the Canadian Humanities and Social Sciences (HSS) Commons, an open online space where Canadian humanities and social sciences researchers and stakeholders can gather to share information and resources, make connections, and build community. Situated at the intersection of the fields of digital scholarship, open access, digital humanities, and social knowledge creation, the Canadian HSS Commons is being developed as part of a research program investigating how a not-for-profit, community-partnership research commons could benefit the humanities and social sciences community in Canada. This paper considers an intellectual foundation for conceptualizing the commons, its potential benefits, and its role in the Canadian scholarly publishing ecosystem; it explores how the Canadian HSS Commons’ open, community-based platform complements existing research infrastructure serving the Canadian humanities and social sciences research community.
+ Kitchin, Rob, Sandra Collins, and Dermot Frost. 2015. “Funding Models for Open Access Digital Data Repositories.” Online Information Review 39 (5): 664–81. https://doi.org/10.1108/OIR-01-2015-0031
Kitchin, Collins, and Frost outline financial models for digital open access repositories that are not funded by a core source. The authors create a list of challenges to open access, including Christine Borgman’s “dirty little secret”: despite the promotion of open data sharing, not much sharing is actually taking place. The authors propose that creating open access data repositories is not enough for attitudes in academia to change; substantial cultural changes in research practices must take place, and researchers should be encouraged to deposit their data as they complete research. The survey covers 14 potential funding streams for open access research data repositories. The authors argue that the lack of full, core funding and a direct funding stream through payment-for-use pose considerable financial challenges to the directors of such repositories. The collections that are maintained without funding are in significant danger of being lost to bit rot and other technological challenges.
¤ Shearer, Kathleen. 2011. Comprehensive Brief on Open Access to Publications and Research Data for the Federal Granting Agencies. Ottawa: Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. https://policycommons.net/artifacts/1229984/comprehensive-brief-on-open-access-to-publications-and-research-data-for-the-federal-granting-agencies/1783057/
Shearer reviews what other nations with similar economic conditions to Canada are doing to support open access. In particular, she looks at Australia, the Netherlands, the United Kingdom, and the United States (although she also considers pan-European activities at times). According to Shearer, the United Kingdom has dedicated the most resources to this area and is well on its way to building a robust open access infrastructure. By contrast, Canada has expressed goodwill in this area but lacks tangible, coordinated, national policy, legislation, or infrastructure. Overall, Shearer cites the economic possibilities of open access, as well as the progressive policies and supportive activities of other countries and the importance of informed consumers alongside informed citizens.
+ Tanenbaum, Greg. 2014b. “North American Campus-Based Open Access Funds: A Five Year Progress Report.” SPARC. https://sparcopen.org/wp-content/uploads/2016/01/OA-Fund-5-Year-Review.pdf
Tanenbaum provides an overview of the successes and challenges of campus-based open access funds across North America. The report provides quantitative data to show how the funds have encouraged authors to get involved with open access publishing. It also includes a qualitative analysis of the successes, challenges, level of satisfaction, and communication with faculty and administration. The author notes that launching funds at more institutions will highlight the impact of this mechanism on scholarly communication and adds that “SPARC anticipates an ongoing involvement in campus-based Open Access Funds” (5).
+ Corrall, Sheila, Mary Anne Kennan, and Waseem Afzal. 2013. “Bibliometrics and Research Data Management Services: Emerging Trends in Library Support for Research.” Library Trends 61 (3): 636–74. https://doi.org/10.1353/lib.2013.0005
Corrall, Kennan, and Afzal analyze current trends in library support for research. University and college administrators and funding bodies are increasingly questioning the value of academic libraries, especially as the web becomes more accessible and user-friendly. According to the authors, e-research should provide libraries with the impetus to extend their services beyond the material archive. Libraries in the United States, such as the Massachusetts Institute of Technology’s (MIT) libraries, are quicker to adapt to digital services, and the Association of Research Libraries in 2009 found 21 libraries that already provide infrastructure or support for e-science and another 23 that intend to do so. The authors administered a questionnaire asking respondents about their organizations, bibliometrics, research data management, and future plans. Corrall, Kennan, and Afzal suggest that academic librarians involved in research support need to understand governmental and institutional research agendas so that they can support strategy and policy development and implementation.
+ Gargouri, Yassine, Chawki Hajjem, Vincent Larivière, Yves Gingras, Les Carr, Tim Brody, and Stevan Harnad. 2010. “Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research.” PLoS ONE 5 (10): n.p. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0013636
Gargouri, Hajjem, Larivière, Gingras, Carr, Brody, and Harnad compare the relative impact of open access and non-open access articles that are archived in a repository because of a mandate or due to self-selection. Gargouri et al.’s goal is to test the self-selection bias hypothesis with regard to what is known as the Open Access Advantage, and they cite the various articles that advance this hypothesis. The Open Access Advantage refers to the well-documented finding that archiving a research article in an open access repository increases its citation count. Those who argue that a self-selection bias is at play contend that these articles are cited more frequently because authors primarily archive their best or most citable work (and so these articles would be cited more often anyway). Based on their findings, the authors conclude that self-selected archived articles (author choice) are not necessarily cited more often than mandated archived articles (no author choice). Thus, the self-selection bias hypothesis does not hold.
McKiernan, Erin C., Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, and Juan Pablo Alperin. 2019. “Use of the Journal Impact Factor in Academic Review, Promotion, and Tenure Evaluations.” PeerJ Preprints 7: e27638v2. https://doi.org/10.7287/peerj.preprints.27638v2
McKiernan et al. analyze the review, promotion, and tenure (RPT) documents from institutions across the United States and Canada to determine how the journal impact factor (JIF) is employed in review, promotion, and tenure policies and processes. They explain that the journal impact factor was developed as a metric to help libraries decide which journals to subscribe to, and that its continued use as a proxy for journal quality is problematic, including when it is used in review, promotion, and tenure evaluations. The authors found that a large majority of references to the journal impact factor in the review, promotion, and tenure documents support its use, treating it as a marker of the quality of the journal and, by extension, of the publication, or, less frequently, as a marker of prestige. They argue that the frequency with which these documents promote the journal impact factor shows that review, promotion, and tenure evaluation processes need improvement, and they note that, beyond the evidence from these documents, the journal impact factor also plays a role in informal policy and culture.
¤ Pooley, Jefferson. 2017. “The Impact Platform.” Parameters: Knowledge Under Digital Conditions (blog). http://parameters.ssrc.org/2017/01/the-impact-platform/
Pooley weighs the pros and cons of the recent development of websites like The Conversation, which showcase scholarly pieces written for a non-specialist audience. Pooley argues that this sort of publication venue is flawed and problematic because it encourages the data-driven reliance on quantitative metrics to judge the value of academic work. Many feel as though this turn toward metrics is unfair, as access and usage metrics do not accurately portray the depth of engagement or intellectual impact on a reader. Pooley measures this drawback against the value of so-called impact platforms as successful arbiters of open scholarship. He also lauds the open licensing and emphasis on reuse/republication of The Conversation as an example of positive open access mechanisms that encourage wide dissemination of and engagement with academic work.
Wang, Xianwen, Chen Liu, Wenli Mao, and Zhichao Fang. 2015. “The Open Access Advantage: Considering Citation, Article Usage and Social Media Attention.” Scientometrics 103 (2): 555–64. https://doi.org/10.1007/s11192-015-1547-0
Wang et al. analyze differences in engagement rates between open access articles and non-open access articles published in Nature Communications, considering citation rates as well as altmetrics including article views, mentions on social media (Twitter and Facebook), and article downloads. Noting that previous studies about the open access citation advantage had widely differing findings, they explain that the increased availability of usage data, including monthly and daily or dynamic usage data, has enabled analysis of engagement with articles over time. Their findings confirm an open access citation advantage when considering static and dynamic usage data, and show that this difference increases steadily, indicating that there is a persistent open access advantage for usage over time.
¤ Boon, Marcus. 2014. “From the Right to Copy to Practices of Copying.” In Dynamic Fair Dealing: Creating Canadian Content Online, edited by Rosemary J. Coombe, Darren Wershler, and Martin Zeilinger, 56–64. Toronto: University of Toronto Press.
Boon argues in favour of copying and suggests that the current interpretation of intellectual property is far too restrictive. The author suggests that intellectual property and copying should not be regulated by private organizations like Access Copyright, and he argues that such organizations act as proxies for the Canadian publishing industry that funds them. Rather, Boon counters, copying is a key cultural practice that should be encouraged and facilitated, not criminalized.
¤ Lawson, Stuart. 2017. “Access, Ethics, and Piracy.” Insights 30 (1): 25–30. http://doi.org/10.1629/uksg.333
Lawson briefly explores the phenomenon of academic piracy, or the sharing of copyrighted, toll access research on sites like Sci-Hub or aaaaarg. They rely on the historical framework that Adrian Johns lays out in his book Piracy to reinforce the idea that intellectual property is not a natural or necessary state, and only came about in response to the rampant copying of books in eighteenth-century England. Although Lawson isn’t anti-piracy, per se (suggesting that sometimes what is ethical is not the same as what is legal), they do argue that open access is a way to share scholarship widely and legally. Lawson credits academic pirating sites with proving that the current, restrictive scholarly communication system is no longer fit for purpose: open access offers a legal alternative.
¤ Sale, Arthur, Marc Couture, Eloy Rodrigues, Les Carr, and Stevan Harnad. 2014. “Open Access Mandates and the ‘Fair Dealing’ Button.” In Dynamic Fair Dealing: Creating Canadian Content Online, edited by Rosemary J. Coombe and Darren Wershler, 189–200. Toronto: University of Toronto Press.
Sale et al. explore one approach to implementing open access practices and policies: the fair dealing button. The fair dealing button is an add-on to institutional repositories developed out of the longstanding tradition of individuals writing to authors to request a copy of their article. When authors deposit a closed-access article in a repository, all that is available for others to see is the article’s metadata. With a fair dealing button, users can still review the metadata but can also generate an automated request to the author for access to the full article. Even if an article is closed-access, the fair dealing button provides the reader with a single copy of the article for personal, research, creative, or journalistic use. Sale et al. call this process “almost open access” (191), and they argue that the fair dealing button makes open access mandates more feasible. For the authors, this is an effective strategy for streamlining green open access and repository deposit more generally: institutions could mandate that all articles be deposited upon or just prior to publication—regardless of their status as open, closed, or embargoed—as any closed access copies could be made available upon request.
+ Snijder, Ronald. 2015. “Better Sharing Through Licenses? Measuring the Influence of Creative Commons Licenses on the Usage of Open Access Monographs.” Journal of Librarianship and Scholarly Communication 3 (1): 1187. https://doi.org/10.7710/2162-3309.1187
Snijder measures the influence of Creative Commons licenses on the usage of open access monographs. He suggests that there is, in fact, no evidence that making books available under open access licenses results in higher download numbers than licenses permitting only personal use. For Snijder, the application of open licenses to books cannot, on its own, result in more downloads. Open licenses do, however, pave the way for other intermediaries to offer new discovery and aggregation services. Snijder’s study breaks away from the tradition of work on open licenses by measuring the effects of free licenses; he focuses on the implications of freely licensing open access monographs as opposed to discussing the legal frameworks surrounding copyright law and the Creative Commons.
Brown, Josh. 2020. “Developing a Persistent Identifier Roadmap for Open Access to UK Research.” Jisc. https://repository.jisc.ac.uk/7840/
Brown provides an overview of the use and provision of persistent identifiers (PIDs) in the United Kingdom, outlines areas in which new persistent identifiers are needed, and makes recommendations for a national persistent identifier strategy to support open scholarship. He notes that these identifiers are essential linking structures in the open digital research ecosystem, citing their inclusion in policies and standards such as the FAIR (Findable, Accessible, Interoperable, Reusable) principles and Plan S. The report outlines model persistent identifier-optimized workflows for gold and green open access publication, demonstrating how they can automate information transfer during the publication workflow, thereby reducing administrative burdens that may stand as a barrier to open publication. It notes that, although adopting a single persistent identifier provider for each entity (e.g., ORCID iDs for researchers, digital object identifiers for publications) is preferable from an information management standpoint, there are risks in being dependent on a single provider, particularly commercial providers. These risks can be mitigated through community-led, transparent organizational governance and sustainable business models. The report recommends the development of a UK PID Consortium, modelled on the UK ORCID Consortium, that strategically optimizes the most widely used persistent identifiers, supports the adoption and ensures the sustainability of emerging ones, and supports the creation and adoption of new high-priority persistent identifiers. It concludes that, although developing a national strategy for persistent identifiers that meets the needs of the national research community working in an international context is challenging, working toward the shared goal of facilitating open scholarship provides focus to these efforts.
Haak, Laurel L., Alice Meadows, and Josh Brown. 2018. “Using ORCID, DOI, and Other Open Identifiers in Research Evaluation.” Frontiers in Research Metrics and Analytics 3. https://doi.org/10.3389/frma.2018.00028
Haak, Meadows, and Brown discuss the benefits of using open persistent identifiers (PIDs) in research evaluation. Noting that research evaluation involves comparing a project’s goals to its outcomes and that finding this data manually can be very labour intensive, they argue that using open identifiers can make this process more effective, efficient, and reliable. Two challenges involved in research evaluation are a lack of clear project goals and a lack of available data: journal citation data is often the only data used. Taking a more expansive view of research success requires building infrastructure that allows for the identification of various types of research products as well as the people, organizations, and other entities involved. Some of these identifiers exist already, such as ORCID iDs for researchers and digital object identifiers for publications, but the more entities with persistent identifiers, the richer the data can be. Although this component of the digital research infrastructure is still in development, research evaluators can explain its value and advocate for its development.
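The kind of identifier-based linkage Haak, Meadows, and Brown describe can be illustrated with a small sketch. All identifiers, names, and records below are hypothetical placeholders invented for illustration; an actual evaluation workflow would draw this data from the ORCID and Crossref registries rather than local dictionaries.

```python
# Toy sketch of PID-based linkage for research evaluation.
# Every ORCID iD and DOI here is an illustrative placeholder, not a real record.

# Researcher records keyed by (hypothetical) ORCID iD
researchers = {
    "0000-0000-0000-0001": {"name": "A. Researcher"},
}

# Publication records keyed by (hypothetical) DOI, linked to authors by ORCID iD
publications = {
    "10.0000/example.1": {"title": "First result", "orcids": ["0000-0000-0000-0001"]},
    "10.0000/example.2": {"title": "Second result", "orcids": ["0000-0000-0000-0001"]},
}

# A grant record linking a funder's award to researchers via ORCID iDs
grant = {"doi": "10.0000/grant.1", "orcids": ["0000-0000-0000-0001"]}

def outputs_for_grant(grant, publications):
    """Collect publications whose author list overlaps the grant's ORCID iDs."""
    funded = set(grant["orcids"])
    return sorted(
        doi for doi, pub in publications.items()
        if funded & set(pub["orcids"])
    )

print(outputs_for_grant(grant, publications))
# With the toy data above, both example DOIs are linked to the grant
```

The point of the sketch is the one the authors make: once people, grants, and outputs all carry persistent identifiers, assembling an evaluation dataset reduces to following links rather than manual searching.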
Marshall, Kelli. 2015. “How to Curate Your Digital Identity as an Academic.” The Chronicle of Higher Education (blog). January 5. https://www.chronicle.com/article/how-to-curate-your-digital-identity-as-an-academic/
Marshall offers advice to academics about creating and controlling their online identities. She argues that intentionally creating an online identity is crucial, particularly for those on the job market or applying for tenure. Marshall advises that all academics create a professional website and add its link in their social media profiles in order to build online networks. She also suggests using a consistent avatar and signature with contact information across platforms and keeping the content and look of these up to date. Finally, she recommends monitoring this digital identity regularly, and notes that curating an online identity is an important way for academics to demonstrate their technical skills.
ORBIT Funder Working Group. 2019. “ORCID and Grant DOIs: Engaging the Community to Ensure Openness and Transparency of Funding Information.” https://doi.org/10.23640/07243.9105101.v1
This document outlines the benefits of persistent identifiers (PIDs) in interactions in grant funding workflows, specifically ORCID iDs to identify individuals and Crossref digital object identifiers for grants. Based on the tenets that persistent identifiers enable the information associated with funding to be managed accurately, efficiently, and transparently, the document presents four possible workflow scenarios with varying degrees of accuracy and effort from all parties involved. Its recommended workflow is for funders to collect ORCID iDs as part of their grant application, register digital object identifiers for grants through Crossref, and attach researchers’ ORCID iDs to those Crossref records. The document concludes with some recommendations for different stakeholder groups: researchers should use and share their ORCID iDs when applying for grants and publishing research, funders should collect ORCID iDs and register digital object identifiers for grants, Crossref and other indexers should facilitate linkages between persistent identifiers, publishers should collect and use ORCID iDs and grant digital object identifiers, and ORCID should facilitate these processes.
Allison-Cassin, Stacy, and Dan Scott. 2018. “Wikidata: A Platform for Your Library’s Linked Open Data.” The Code4Lib Journal 40 (May). https://journal.code4lib.org/articles/13424
Allison-Cassin and Scott discuss the possibilities that Wikidata offers in terms of linked open data (LOD) for GLAM institutions (galleries, libraries, archives, and museums), noting that, although the value of linked open data for libraries has long been recognized, technical and infrastructural barriers have stood in the way of widespread engagement. Wikidata, a project of the Wikimedia Foundation, removes many of these barriers as an open platform for creating and using linked open data. The authors provide an overview of how Wikidata works, drawing on their experiences using it, and discuss its applications for community outreach and library systems. They conclude by pointing out that, because Wikidata’s linked open data is created by the community, using it can help address some of the systemic biases built into existing description and metadata structures.
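As a sketch of what working with Wikidata's linked open data looks like in practice, the snippet below composes a request to the public Wikidata Query Service SPARQL endpoint. The endpoint URL and the identifiers P31 ("instance of") and Q5 ("human") are real Wikidata conventions; the query itself is a minimal example of this author's invention, not one drawn from the article.

```python
from urllib.parse import urlencode

# Public SPARQL endpoint of the Wikidata Query Service
WDQS = "https://query.wikidata.org/sparql"

# P31 = "instance of", Q5 = "human" -- well-known Wikidata identifiers.
# This query asks for five items that are instances of "human", with labels.
query = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P31 wd:Q5 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def build_request_url(sparql: str) -> str:
    """Compose a GET request URL asking the endpoint for JSON results."""
    return WDQS + "?" + urlencode({"query": sparql, "format": "json"})

# Fetching the URL (e.g., with urllib or requests) returns SPARQL JSON results;
# no network call is made here.
print(build_request_url(query)[:50])
```

This is the low-barrier entry point the authors emphasize: a library can query (and, through other APIs, contribute) linked open data without running its own triplestore.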
ARL Task Force on Wikimedia and Linked Open Data. 2019. “ARL White Paper on Wikidata: Opportunities and Recommendations.” Report. Association of Research Libraries. https://apo.org.au/node/254221
The Association of Research Libraries (ARL) presents an overview of Wikidata’s history, current applications, and potential applications in research libraries, as well as a set of recommendations for how libraries and librarians can engage with the platform. The authors argue that, although Wikidata cannot and should not replace library systems, it offers a low-barrier solution to some of the challenges libraries face in integrating linked open data into their systems. The report recommends experimenting with Wikidata and building engagement with the Wikidata community as first steps, followed by pilot projects that use the platform to create and share linked open data from local projects. Other recommendations include hosting Wikimedians in Residence and providing support for librarians, researchers, and community members—including underrepresented groups—to engage in these efforts.
+ Brown, Susan, and John Simpson. 2015. “An Entity by Any Other Name: Linked Open Data as a Basis for a Decentered, Dynamic Scholarly Publishing Ecology.” Scholarly and Research Communication 6 (2). https://doi.org/10.22230/src.2015v6n2a212
Brown and Simpson propose that linked open data enables more easily navigable scholarly environments, permitting better integration of research materials and greater interlinkage between individuals and institutions. They frame linked open data integration as an ecological problem in a complex system of parts and relationships, in which the different parts of the ecology co-evolve according to the relationships in the system. The authors suggest that tools are needed for establishing automated conditions; for evaluating the provenance, authority, and trustworthiness of linked open data resources; and for facilitating corrections and enhancements. They single out an ontology negotiation tool as a particularly valuable potential contribution to the Semantic Web: such a tool would create an opportunity for collaboration between different sectors of the knowledge economy and would allow the Semantic Web to develop as an evolving space of knowledge production and dissemination.
Byrne, Gillian, and Lisa Goddard. 2010. “The Strongest Link: Libraries and Linked Data.” D-Lib Magazine 16 (11/12). https://doi.org/10.1045/november2010-byrne
Byrne and Goddard survey the potential benefits and challenges of linked data for libraries and offer some recommendations for libraries to engage in developing the semantic web. Citing interoperability and federated searching as key benefits, they note that, although some technical barriers exist, the most significant barriers are cultural and informational, including a lack of understanding of what linked data is and what it can offer libraries, and concerns related to privacy and rights management. The authors conclude by noting that changes already underway in the area of resource description will facilitate the adoption of linked data, and that libraries and librarians can keep pushing things forward by developing small-scale linked data projects and local controlled vocabularies, engaging with library and linked data communities, and advocating to vendors for linked data, including linked open data.
Crompton, Constance, Lori Antranikian, Ruth Truong, and Paige Maskell. 2020. “Familiar Wikidata: The Case for Building a Data Source We Can Trust.” Pop! Public. Open. Participatory 2 (October). https://popjournal.ca/issue02/crompton
Crompton, Antranikian, Truong, and Maskell introduce Wikidata as a source of structured humanities data and discuss a case study of its application. They make a case for the value of Wikidata and for humanities researchers to get involved in its development, noting that their expertise is needed to add data about entities as well as the ontological relationships among them. The authors describe their method for extracting biographical data from Bartlett’s Familiar Quotations and contributing it to Wikidata as part of the Linked Familiarity project. They conclude that working with Wikidata is not without its challenges, but that it offers a low-barrier opportunity for humanities researchers to use and create linked open data through open social scholarship.
+ Akers, Katherine G., and Jennifer Doty. 2013. “Disciplinary Differences in Faculty Research Data Management Practices and Perspectives.” International Journal of Digital Curation 8 (2): 5–26. http://ijdc.net/index.php/ijdc/article/view/263
Akers and Doty conduct a survey on disciplinary differences in faculty research data management practices and perspectives. The authors divide faculty members into four broad research domains: arts and humanities, social sciences, medical sciences, and basic sciences. The survey considers the percentage of faculty in each area, as well as attitudes toward open access data, familiarity with basic data management terms, and faculty attitudes toward digital documentation and preservation. Both authors worked to create Shibboleth authentication access for Emory University researchers to the DMPTool, which walks researchers through the creation of data management plans for grant proposals. The authors also point out that OpenEmory, the current institutional repository, does not warrant further research data development, and that more effort could be focused on facilitating the deposit of data in disciplinary repositories or setting up instances of the Dataverse Network. Serious consideration of both similarities and dissimilarities among disciplines can guide academic librarians in developing a range of data management-related services.
Baker, David, Donna Bourne-Tyson, Laura Gerlitz, Susan Haigh, Shahira Khair, Mark Leggott, Jeff Moon, Chantel Ridsdale, Robbin Tourangeau, and Martha Whitehead. 2019. “Research Data Management in Canada: A Backgrounder.” Zenodo. https://doi.org/10.5281/ZENODO.3341596
Baker et al. survey the research data management (RDM) ecosystem in Canada alongside the research lifecycle, discuss the importance of good data management, and consider some challenges facing the Canadian research community. The report argues that robust, coordinated research data management policies and practices are essential for realizing the promise of Canadian research data, as is digital research infrastructure, including data repositories. Although community-led organizations such as the Canadian Association of Research Libraries (CARL) and Research Data Canada (RDC) have led the way in building a research data management community of practice, national coordination is necessary to ensure equitable access to and the sustainability of resources across the country. Although Canada has the opportunity to become a leader in research data management, its challenges include the lack of stable and secure funding, the lack of national coordination, and the need for cultural change toward recognizing and valuing research data management.
+ Fear, Kathleen. 2011. “‘You Made It, You Take Care of It’: Data Management as Personal Information Management.” International Journal of Digital Curation 6 (2): 53–77. http://www.ijdc.net/index.php/ijdc/article/view/183
Fear’s article explores data management at the University of Michigan, investigates the factors that have shaped the practices of researchers, and seeks to understand the motives for extending or inhibiting changes in data management practices. She argues that institutions should have an interest in protecting the data of their researchers. For Fear, improving infrastructure for data sharing and accessibility is one way of improving data management standards. Her survey asks whether researchers consider their data to be personal information, how they manage their data over the short term, what kind of data management plans they provide when applying for funding, what methods they use to preserve data over the long term, and how familiar they are with the basics of data management. The study concludes with the observation that data management is part of a continuum of processes that tend to blur together as researchers move from document to document. According to Fear, researchers regard separating data management from other research activities as confusing and counterproductive.
+ Henty, Margaret, Belinda Weaver, Simon Bradbury, and Simon Porter. 2008. “Investigating Data Management Practices in Australian Universities.” APSR. http://eprints.qut.edu.au/14549/1/14549.pdf
Henty, Weaver, Bradbury, and Porter conduct a survey on changing expectations for the provision of data management infrastructure in Australian universities. Most of the respondents are academic staff, with significant postgraduate student participation and a low response rate from emeritus or adjunct professors. The questions asked of respondents concern researcher awareness of digital data, the types of digital data collected, the sizes of the data collections, the software used for analysis and manipulation of digital assets, and research data management plans. The questions also concern institutional responsibility and structure for data management, such as whether researchers outside the team are allowed to access shared research data, and how the data is accessed and used. Henty et al. compile data from the Queensland University of Technology, the University of Melbourne, and the University of Queensland.
+ Jones, Sarah, Alexander Ball, and Cuna Ekmekcioglu. 2008. “The Data Audit Framework: A First Step in the Data Management Challenge.” International Journal of Digital Curation 3 (2): 112–20. https://doi.org/10.2218/ijdc.v3i2.62
Jones, Ball, and Ekmekcioglu provide a summary of their tool, the Data Audit Framework, which provides organizations with the means to identify, locate, and assess the current management of their research assets. The framework was designed to be applied without dedicated or specialist staff, making librarians suitable auditors for the program. Common issues plaguing data management at the institutional level are access to data storage, lack of awareness of data policy, and a lack of long-term legacy data preservation mechanisms. The authors argue that institutional data policies with guidance on best practices in data creation, management, and long-term preservation would greatly assist departments in maintaining digital assets. They then provide a list of organizations from which departments can receive advice on best practice and services that can equip postgraduates and department members with the support needed to produce sound data management plans. The Data Audit Framework identifies main data issues, including areas where data is at risk, and helps to develop solutions.
+ Krier, Laura, and Carly A. Strasser. 2013. Data Management for Libraries: A Lita Guide. Chicago: ALA TechSource.
Krier and Strasser’s guide to data management for libraries is intended for libraries that are in the early stages of initializing data management programs at their institutions. The opening chapters provide definitions of data management, different types of research data, curation, and lifecycle. The guide contains advice on how to start a new service and point-form questions to help the reader decide what kind of plan works best for their institution. The authors suggest identifying researchers who are receptive to working with the library and request assistance with data management plans or curation services. An overview of descriptive, administrative, and structural metadata is provided, along with an explanation of its role in data management. The differences between storage, preservation, and archiving are discussed, along with definitions of domains and institutional repositories. The authors then briefly describe the preservation process. The final chapters loosely cover access and data governance issues that have caused problems with data management in the past.
+ Lewis, M. J. 2010. “Libraries and the Management of Research Data.” In Envisioning Future Academic Library Services, 145–68. London: Facet Publishing. http://eprints.whiterose.ac.uk/11171/
Lewis begins his chapter by asking the rhetorical question of whether managing data is a job for university libraries. He argues that helping to manage data as part of the global research knowledge base is indeed part of the university library’s role; however, the scale of the challenge requires concerted action by a range of stakeholders, not all of whom are library employees. Lewis advises institutions to develop research data management policies that cover building data confidence in the library workforce, providing research data advice, developing research data awareness, teaching data literacy to postgraduate students, bringing data into undergraduate research-based learning, developing local data curation capacities, identifying required data skills in collaboration with library and information science schools, leading local data policy development, and influencing national data policy. Non-trivial research funding is needed for these initiatives and should be funneled through a primary two-year “pathfinder” phase supported by major research councils. Lewis concludes that developing such training requires award-bearing programs (master’s-level training for data managers and for data scientists pursuing career-track positions in data centres), accredited short courses, and training for data librarians.
+ Romary, Laurent. 2012. “Data Management in the Humanities.” ERCIM News (April). https://ercim-news.ercim.eu/images/stories/EN89/EN89-web.pdf
Romary describes several data management tools for the humanities. The first is HAL, a multi-disciplinary open access archive for depositing and circulating scientific research documents, regardless of publication status. He then turns to the Digital Research Infrastructure for the Arts and Humanities (DARIAH) project, which aims to create a solid infrastructure that ensures the long-term stability of digital assets and supports the development of a wide range of services around them. The project depends on the notion of digital surrogates, which can be metadata records, scanned images, digital photographs, or any kind of extract or transformation of existing data. A unified data landscape for humanities research would stabilize the experience of researchers in circulating their data. Romary suggests that an adequate licensing policy must be defined to assert the legal conditions under which data assets can be disseminated, and that researchers involved with initiatives such as DARIAH need to converse with data providers on how to create a seamless data landscape.
+ Surkis, Alisa, and Kevin Read. 2015. “Research Data Management.” Journal of the Medical Library Association 103 (3): 154–56. https://doi.org/10.3163/1536-5050.103.3.011
Surkis and Read provide an introductory resource for librarians who have had little or no experience with research data management. Basic concepts are defined, such as the fluidity of data in process and analysis as well as the data lifecycle. The authors suggest that the line between publications and data is blurry, and that data management is essential in making data and publications discoverable. This, they argue, is a central task of the librarian. The authors then recommend the online course, MANTRA: Research Data Management Training, to introduce librarians and researchers to the topic.
+ Wilson, James A. J., Luis Martinez-Uribe, Michael A. Frazer, and Paul Jeffreys. 2011. “An Institutional Approach to Developing Research Data Management Infrastructure.” International Journal of Digital Curation 6 (2): 274–87. http://ijdc.net/index.php/ijdc/article/view/198
Wilson, Martinez-Uribe, Frazer, and Jeffreys suggest that the University of Oxford needs to develop a centralized institutional platform for managing data through all stages of its lifecycle, one that mirrors the institution’s highly federated structure. The Bodleian Libraries are currently developing a data repository system (Databank) that promises metadata management and resource discovery services. Researchers are given the role of guiding and validating each strand of data development as projects progress. Institutional data management is favoured over the establishment of national repositories. The authors conclude by suggesting that data management might be better placed in, or integrated with, cloud-based services that are implemented in institutions but do not belong to them.