
Vision Statements

Each contributor addresses three questions: a vision statement, what is preventing us from realising this vision, and how these impediments can be overcome.
Tim Clark et al.

Nanopublications revisited: see the vision statement appended.

Paul Groth 

To enable reusable scientific artifacts that allow all forms of science content "to count". 

Tools that make it easy to capture scientific content transparently and systematically. The major issue is how to show rigorously that the new forms of research communication are effective; we need metrics.

Social commitment and killer apps. 

The Altmetrics Manifesto

W3C Provenance Working Group

The Value of Data - Nature Genetics

My Blog: http://thinklinks.wordpress.com 

Anita de Waard 

  • Create a form of the research paper that enables the creation of, and relationships between, data and claims
  • Create a knowledge platform that allows the sharing of executable components as part of a submission
  • Authoring tools
  • Validation habits
  • Interoperable standards 
  • Collaborative tools and standards
  • Novel methods of validation
  • Platforms that allow the exchange of software components

Michael Kurtz 

Develop an integrated information environment for astronomy 

Funding structure of Journals/Libraries 

New organizations, search engines, data centers, archives 


Fiona Murphy

Develop best practice for depositing data and datasets in repositories which:
  • enables links to relevant papers
  • has high compliance levels due to appropriate incentives, resources and policies

Impediments:
  • Researchers aren't incentivised: e.g. too much work, fear of losing control of IP.
  • No one wants to assume responsibility for costs.
  • Mistrust of curation and re-use policies.

How to overcome them:
  • Closer collaboration with funders.
  • Find small-scale examples that work and leverage them.
  • Make the process easier (automate where possible).
  • Publicise the benefits.
 David Rosenthal
 The content of communication about research will follow the same evolutionary path as general web content, from static to dynamic. Reproducibility will depend on our ability to preserve and re-perform executable objects or services. This will require the ability to re-construct the context in which they were executed, which will contain other executable objects as well as data objects that evolve through time.
Current techniques for preserving static content such as journals turn out to be too expensive for the available resources. The content that is at risk is not being preserved, because expensive preservation can only be applied to expensive content, which, because it is expensive, is not at significant risk. Current techniques for preserving dynamic content are vastly more expensive still, and the business models that assign it value are much less clear.
 In the current economic climate, large new expenditures are out of the question. We need to find ways of delivering dynamic content that make preserving it affordable. This requires understanding where the costs of preservation come from, and engineering them out of the system.
 Three blog posts relevant to business model discussions:
Interesting talks by VC & musician Roger McNamee and by Larry Lessig:
And everyone thinking about any form of publishing should read these 2009 posts:
Simon Buckingham Shum

It will eventually be the case that a valid contribution to the literature may be a much finer-grained node than a whole document, once connected meaningfully to other nodes in the network. Or, a contribution may itself take the form of one or more new meaningful connections between existing nodes. This moves us away from the Gutenberg, papyrocentric model of the literature towards a more appropriate network-centric model for making knowledge-level claims, which supports digital services, e.g. more effective tracking and interrogation of what is known, not known, and contested.

We are trained from an early age to read and write ideas as linear documents, rather than as hypertext networks of ideas. The current generation of senior academics has poor technical skills.

The next generation is growing up thinking about information and knowledge more as networks. Meanwhile, the Web 2.0 paradigm is shifting mindset to understand the notion of the participatory web which can be annotated in many layers by humans and machines. Our research provides a new language for understanding the new hypertext literacy.

Digital Research Discourse? Computational Thinking Seminar Series, School of Informatics, University of Edinburgh,  25 Apr., 2007 [slides]

Modelling naturalistic argumentation in research literatures [paper]

Cohere: A prototype for contested collective intelligence [paper]

Research as Hypermedia Narrative [webcast]
George Tsatsaronis / Michael Schroeder

  • Free text cannot be replaced, but it can be enriched with triples and more formal statements.
  • Ontologies are important, but must be hidden from users.
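The idea of enriching free text with triples can be sketched minimally in plain Python; every term and identifier below is a hypothetical illustration (a real system would use RDF with identifiers drawn from a shared ontology):

```python
# A textual claim enriched with subject-predicate-object triples,
# in the spirit of nanopublications. All names here are hypothetical.

claim_text = "Compound X inhibits enzyme Y in vitro."

# The formal assertion: machine-readable triples alongside the free text.
assertion = [
    ("CompoundX", "inhibits", "EnzymeY"),
    ("CompoundX", "studied_in", "in_vitro_assay"),
]

# Provenance triples: who made the claim and where it came from.
provenance = [
    ("assertion_1", "stated_by", "Author_A"),
    ("assertion_1", "derived_from", "Paper_123"),
]

def find_objects(triples, subject, predicate):
    """Return all objects matching a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(find_objects(assertion, "CompoundX", "inhibits"))  # ['EnzymeY']
```

The point of the sketch is that the free text stays primary, while the triples make the claim queryable; the ontology behind the predicate names would stay hidden from the authoring user.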
It's a social process, and academics are conservative when it comes to advancing their careers by accepted measures such as the impact factor. Academics have to change, promoting the important role of supplementary data in standardised repositories, with curation, downloads, etc. as new measures and tools. When assessing academics, encourage novel measures.

When reviewing articles or being editor insist on publication of the data

 Stefan Decker
Scientific information becomes part of a global, universal and explicit Network of Knowledge, where every person, event, claim, hypothesis, argument - every conceivable entity - is represented and it is clear how they are related to each other. Learning about a new topic requires not reading thousands of documents, but examining this network. Adding new scientific knowledge only requires adding nodes and relationships to this network. Everybody can contribute to this network. The contribution is immediately accessible globally by others. Reviewing as well as reputation mechanisms provide ways to filter information.

  •  Missing agreement on abstractions (claim, hypothesis, argument, etc.)
  • Missing reward mechanisms (career progression, recognition)
  • Missing reviewing, trust and reputation mechanisms 
  • Missing publishing tools
  • Missing access and filtering tools
Establishing a new publishing paradigm means establishing a socio-technical system: it is effectively building a socio-technical machine with new rules. Establishing it requires exploiting the network effect and Metcalfe's law, and a long-term co-evolution between technology and society.

First step is to provide initial solutions for problems stated below. Build first islands. Connect these islands using bridges. Encourage new islands, connect them with more bridges. Continue this until the ocean is covered.

The life sciences seem to be perfect early adopters.
Metcalfe's Law
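Metcalfe's law values a network in proportion to the square of its participants, usually proxied by the number of possible pairwise connections, n(n-1)/2. A tiny illustrative calculation (the participant counts are arbitrary) shows why each new "island" connected by a "bridge" adds value superlinearly:

```python
def metcalfe_value(n):
    """Number of possible pairwise connections among n participants:
    the usual proxy for network value under Metcalfe's law."""
    return n * (n - 1) // 2

# Doubling the number of participants roughly quadruples the number
# of possible connections -- the network effect a new publishing
# paradigm must bootstrap island by island.
for n in (10, 20, 40):
    print(n, metcalfe_value(n))  # prints 10 45, 20 190, 40 780
```

This is why connecting existing islands with bridges, as described above, can be more valuable than building new islands in isolation.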

David De Roure

Research "papers" are active objects which conduct science both on demand and autonomously, and self-curate. They are examples of "Computational Research Objects".

This level of (deep) technical interoperability is hard, and there are socio-technical challenges in business models, the flow of intellectual rights, and ethics in the context of automation. We treat data as a first-class citizen but will need to give similar attention to methods.

Intelligent infrastructure solutions, meaningful conversations with those who conduct research and also those who study it, due attention to business models and rights-flow, and events like FoRC to stimulate co-evolution.

Replacing papers:
Executable Journals:
Cameron Neylon

See Position Paper.

Carole Goble

See Position Paper.
Attachments:
  • Paper by Tim Clark, Paolo Ciccarese, Paul Groth, Steve Pettifer, Carole Goble (613k, v. 3, Aug 17, 2011, Anita de Waard)
  • Position Paper, Cameron Neylon (296k, v. 3, Aug 13, 2011, Anita de Waard)
  • Position Paper, Carole Goble (47k, v. 3, Aug 13, 2011, Anita de Waard)