Research Remix

January 8, 2019

Still time to submit to OR2019; I’ll be keynoting!

Filed under: conferences, Uncategorized — Heather Piwowar @ 6:44 am

There is still time to submit a talk proposal for Open Repositories 2019; the deadline is January 16th.  It’s going to be in Hamburg, Germany this year, and there are fellowships available for those with financial need.

And I’ll be keynoting!  :)  Super looking forward to it…. repositories are at the center of so many exciting changes right now.

Anyway, details below.  I hope you can make it; it would be great to meet you and/or see you again!

Open Repositories


(more…)

January 3, 2019

How to read from and write to a Google spreadsheet from Heroku using Python

Filed under: Tech Tip, Uncategorized — Heather Piwowar @ 6:30 am

Hi all!  This post isn’t about scholcomm or open science, so it won’t be of interest to most people who used to read this blog back in the day, but I’d kinda like to get back into blogging, and I’ve decided the way to do that is to just start blogging more until the habit comes back.  Fingers crossed.  :)

Also, all the techies among us have benefited enormously from others who write “been there, done this, it worked for me” posts, so I’d like to start giving back a bit more on that front!

Here’s how to get Python to read from and write to a Google spreadsheet, storing the credentials in an environment variable so that you can run it from Heroku, for example.

First, read this great post by Greg Baugues at Twilio, because this solution is just a minor modification of his great instructions.  Go enable the API as he describes, get your credentials file, and don’t forget to go to your Google spreadsheet and give permissions to the generated email address as he directs.

Then, set an environment variable to the contents of the credentials file you created, wrapped in single quotes, like:

heroku config:add GOOGLE_SHEETS_CREDS_JSON='{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "slkdfjlksdfljksdf",
  "private_key": "-----BEGIN PRIVATE KEY-----\nlsdjflskjdfljsdjfls
...

}
'


Then use the code in this gist, replacing the URL with the URL of your spreadsheet.
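In case you’d rather not click through, here’s a rough sketch of what the code boils down to (not the exact gist; the spreadsheet URL and the value written are placeholders, and this assumes the gspread and oauth2client libraries):

```python
import json
import os

import gspread
from oauth2client.service_account import ServiceAccountCredentials

# Scopes for reading and writing spreadsheets via the Drive and Sheets APIs
SCOPE = ["https://spreadsheets.google.com/feeds",
         "https://www.googleapis.com/auth/drive"]

# Pull the service-account credentials back out of the environment variable
# we set above with `heroku config:add GOOGLE_SHEETS_CREDS_JSON=...`
creds_dict = json.loads(os.environ["GOOGLE_SHEETS_CREDS_JSON"])
creds = ServiceAccountCredentials.from_json_keyfile_dict(creds_dict, SCOPE)
client = gspread.authorize(creds)

# Replace with the URL of your own spreadsheet (shared with the service
# account's generated email address, as described in the post above)
spreadsheet = client.open_by_url(
    "https://docs.google.com/spreadsheets/d/YOUR_SPREADSHEET_KEY")
worksheet = spreadsheet.sheet1

# Read: print every row of the first sheet
print(worksheet.get_all_values())

# Write: put a value in row 1, column 1
worksheet.update_cell(1, 1, "hello from heroku")
```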

Add gspread and oauth2client to your requirements.txt, push it all to GitHub, and it should work!  You can test it with heroku run python google_sheets_from_heroku.py; it should print out the contents of the first sheet of your spreadsheet.

I got it working a day or two ago, so the details are a bit hazy now, but I think at some point I was prompted to go to https://console.developers.google.com/ and also give permissions for the “Google Sheets API” in addition to the “Google Drive API”, so heads up to try that if you run into any snags.

Anyway, hope that helps somebody!

Long live blogging!

December 11, 2018

Open Science Beers Vancouver: Dec 2018

Filed under: events — Heather Piwowar @ 3:50 pm

In Vancouver and love #OpenScience? Come meet up! The next #OpenScienceBeersYVR is coming up: Thursday, December 13th at 6pm in Gastown (localgastown.com). Come hang out with #Scholcommlab, @Impactstory (meet our new team member!), and others! #OA #opendata

“The Local”
3 Alexander Street, Gastown.
localgastown.com
6pm
See you there! :)

November 4, 2018

Open Science Beers Vancouver

Filed under: events, openscience, Uncategorized — Heather Piwowar @ 10:59 am

Open Science Beers Vancouver has been declared!

Tuesday, November 6, 2018
6 PM – 9 PM
The Lido
518 East Broadway, Vancouver, British Columbia V5T 1X5
organized by Ente Hbayar

I’ll be there, come join us!

Would love to meet everyone doing open science in Vancouver!
Needless to say the beers part is optional, come even if you don’t want to drink beers :)
If you are interested but can’t come this time, let us know so we can gauge interest for future events.
Hope to see you there!

September 13, 2018

It’s time to insist on #openinfrastructure for #openscience

Filed under: openinfrastructure, openscience — Heather Piwowar @ 9:00 pm

It’s time.  In the last month there’ve been three events that suggest now is the time to start insisting on open infrastructure for open science:

The first event was the publication of two separate recommendations/plans on open science: a report by the National Academies in the US, and Plan S on open access in Europe.  Notably, although comprehensive and bold in many other regards, neither called for open infrastructure to underpin the proposed open science initiatives.

Peter Suber put it well in his comments on Plan S:

the plan promises support for OA infrastructure, which is good. But it never commits to open infrastructure, that is, platforms running on open-source software, under open standards, with open APIs for interoperability, preferably owned or hosted by non-profit organizations. This omission invites the fate that befell bepress and SSRN, but this time for all European research.

The second event was the launch of Google’s Dataset Search — without an API.

Why do we care?  Because of opportunity cost.  Google Scholar doesn’t have an API, and Google has said it never will.  That means no one has been able to integrate Google Scholar results into their workflows or products, which has had a huge opportunity cost for scholarship.  It’s hard to measure, of course (opportunity costs always are), but we can get a sense of it: within two years of the Unpaywall launch (a product which does a subset of the same task, but with an open API and an open bulk data dump), the Unpaywall data has been built into 2,000 library workflows, the three primary A&I indexes, competing commercial OA discovery services, many reports, and the apps of countless startups, with more integrations in the works.  All of that value-add was waiting for a solution that others could build on.
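This is exactly what an open API buys you: anyone can check the OA status of any paper in a few lines of code, no scraping or special permission required. Something like this sketch, using only Python’s standard library (the DOI and email address here are just placeholders; the API requires you to send your own email):

```python
import json
import urllib.request

# Look up the OA status of one DOI via the Unpaywall API
doi = "10.1038/nature12373"
url = "https://api.unpaywall.org/v2/" + doi + "?email=you@example.com"

with urllib.request.urlopen(url) as response:
    record = json.loads(response.read().decode("utf-8"))

print(record["is_oa"])                 # is a free-to-read version available?
best = record.get("best_oa_location")  # None if no OA copy was found
if best:
    print(best["url_for_pdf"])
```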

If we relax and consider the Dataset Search problem solved now that Google has it working, we’re forgoing these same integration possibilities for dataset search that we lost out on for so long with OA discovery.  We need to build open infrastructure: the open APIs and open source solutions that Peter Suber talks about above.

As Peter Kraker put it on Twitter the other day: #dontLeaveItToGoogle.

The third event was of a different sort: a gathering of 58 nonprofit projects working toward open science.  It was the first time we’ve gathered together explicitly like that, and the air of change was palpable.

It’s exciting.  We’re doing this.  We’re passionate about providing tools for the open science workflow that embody open infrastructure.  #OpenSciRoadmap

If you are a nonprofit but you weren’t at JROST last month, join in!  It’s just getting going.


So.  #openinfrastructure for #openscience.  Everybody in scholarly communication: start talking about it, requesting it, dreaming it, planning it, building it, requiring it, funding it.  It’s not too big a step.  We can do it.  It’s time.


ps More great reading on what open infrastructure means from Bilder, Lin, and Neylon (2015) here and from Hindawi here.

pps #openinfrastructure is too long and hard to spell for a rallying cry.  #openinfra??  help :)

September 3, 2018

Impactstory is hiring! Come work with us!

Filed under: Uncategorized — Heather Piwowar @ 8:46 am

We’re hiring our first developer!  Until now all the code behind Unpaywall, Impactstory Profiles, Depsy, and everything else we’ve done has been written by Jason (mostly frontend) and me (mostly backend) (with design of both parts done by us together).

But it is getting to be more than we can do ourselves, and we have money to hire, so we are really excited to expand the team!

Job ad is on the Impactstory blog, reposted here to make it easy.  Help spread the word so we can find the perfect person and they can find us :)



ABOUT US

We’re building tools to bring about an open science revolution.

Impactstory began life as a hackathon project. As the hackathon ended, a few of us migrated into the hotel hallway to continue working, completing the prototype as the hotel started waking up for breakfast. Months of spare-time development followed, then funding. That was five years ago — we’ve got the same excitement for Impactstory today.

We’ve also got great momentum.  The scientific journal Nature recently profiled our main product:  “Unpaywall has become indispensable to many academics, and tie-ins with established scientific search engines could broaden its reach.”  We’re making solid revenue, and it’s time to expand our team.

We’re passionate about open science, and we run our non-profit company openly too.  All of our code is open source, we make our data as open as possible, and we post our grant proposals so that everyone can see both our successful and our unsuccessful ones.  We try to be the change we want to see 🙂

ABOUT THE POSITION

The position is lead dev for Unpaywall, our index of all the free-to-read scholarly papers in the world. Because Unpaywall is surfacing millions of formerly inaccessible open-access scientific papers, it’s growing very quickly, both in terms of usage and revenue. We think it’s a really transformative piece of infrastructure that will enable entire new classes of tools to improve science communication. As a nonprofit, that’s our aim.

We’re looking for someone to take the lead on the tech parts of Unpaywall.  You should know some Python, be familiar with relational databases (we use PostgreSQL), and have plenty of programming experience.  But more importantly, we’re looking for someone who is smart, dedicated, and gets things done! As an early team member you will play a key role in the company as we grow.

The position is remote, with flexible working hours, and plenty of vacation time.  We are a small team so tell us what benefits are important to you and we’ll make them happen.

OUR TEAM

We’re at about a million dollars of revenue (grants and earned income) with just two employees: the two co-founders.  We value kindness, honesty, grit, and smarts. We’re taking our time on this hire, holding out for just the right person.

HOW TO APPLY

Sound like you?  Please email your resume, GitHub, and any work you’re particularly proud of to team@impactstory.org and we’ll get back to you.

May 6, 2018

Where’s Waldo with Public Access Links

Filed under: Uncategorized — Heather Piwowar @ 9:56 pm

Five years of blog silence!  Good news: I’m not dead!  :)  Working hard on Unpaywall these days.  Making no promises about any additional blogging, but what the heck, feeling it today, so here goes :)

Was digging into some publisher pages today, and I noticed a trend.  Links to Public Access author manuscripts that CHORUS says are publicly available thanks to funder mandates are often very difficult to actually find on publisher pages.  Want to see what I mean?

There is a “read for free” link on this page. Can you find it?
https://www.sciencedirect.com/science/article/pii/S0045653514009102

… hint: scroll down, far, ignoring the big sticky “Purchase PDF” at the top, to the very bottom of the page, past another subscription login and Purchase option, finally, to “View Open Manuscript”:


Another one. Can you find the free download link on this American Physical Society paper?  https://journals.aps.org/prd/abstract/10.1103/PhysRevD.94.052011 It’s there, but I bet most people wouldn’t find it unless they knew to look.

It’s in the margin, beside “SUBSCRIPTION REQUIRED”, under “Buy Article”, and under “Log in to Institution”. Nope, not “Available via CHORUS”; that takes you to the CHORUS website.  Under that.  Yup!  Pretty clear once you see it, it’s true, but is the public really going to go looking for that when they don’t know to look?


And this one?  https://aip.scitation.org/doi/10.1063/1.4962501
Here the publishers are *counting* on you to know that if you click on “CHORUS” you will get to a free version of the paper, because that’s the only label the free version is given.  If you naively think clicking “PDF” or “Full Text” would be the best way to get to the full text, you just get these options… with no indication that you can also read the author manuscript for free.


Not very happy-making.

And the thing is, the author manuscripts are peer-reviewed versions! They have all the same info as the PDFs these journals want to sell us; they are just missing the prettying up.  So publishers aren’t performing an important quality gate-keeping role here; they are just making it harder than it needs to be for people to find the free versions of articles, the very versions that funders have mandated be made available.  I have no problem if they then want to sell the public a pretty PDF version.  Heck, I might even buy it sometimes.  But let it be an informed decision.  The publishers are telling funders “oh yes of COURSE we link to the free articles on our page” but then doing it in a way that makes it really unlikely to actually improve public access.  #notcool

It’s pretty disappointing, but not very surprising. Counting on toll-access publishers to implement our infrastructure is kinda likely to end up this way, yeah?

Anyway,  I don’t know if anyone is looking at these issues systematically, but I think doing so would be a great idea so we can start making noise to our funders and our publishers to do better.

[original twitter thread on this]


July 12, 2013

Full text: Value all research products

Filed under: Uncategorized — Heather Piwowar @ 7:48 am

As per the Nature copyright assignment form for Comments, I can post the text of my Comment 6 months after publication.  That’s this week, so here it is!

Piwowar, H. (2013). Value all research products. Nature 493(7431), 159.

For more on this article, see previous blog posts:

Value all research products

A new funding policy by the US National Science Foundation represents a sea-change in how researchers are evaluated, says Heather Piwowar

What a difference a word makes. For all new grant applications from 14 January, the US National Science Foundation (NSF) asks a principal investigator to list his or her research “products” rather than “publications” in the biographical sketch section. This means that, according to the NSF, a scientist’s worth is not dependent solely on publications. Data sets, software and other non-traditional research products will count too.

There are more diverse research products now than ever before. Scientists are developing and releasing better tools to document their workflow, check each other’s work and share information, from data repositories to post-publication discussion systems. As it gets easier to publish a wide variety of material online, it should also become easy to recognize the breadth of a scientist’s intellectual contributions.

But one must evaluate whether each product has made an impact on its field — from a data set on beetle growth, for instance, to the solution to a colleague’s research problem posted on a question-and-answer website. So scientists are developing and assessing alternative metrics, or ‘altmetrics’ — new ways to measure engagement with research output.

The NSF policy change comes at a time when around 1 in 40 scholars is active on Twitter [1], more than 2 million researchers use the online reference-sharing tool Mendeley (see go.nature.com/x63cwe), and more than 25,000 blog entries have been written about peer-reviewed research papers and indexed on the Research Blogging platform [2].

In the next five years, I believe that it will become routine to track — and to value — citations to an online lab notebook, contributions to a software library, bookmarks to data sets from content-sharing sites such as Pinterest and Delicious. In other words, to value a wider range of metrics that suggest a research product has made a difference. For example, my colleagues and I have estimated that the data sets added to the US National Center for Biotechnology Information’s Gene Expression Omnibus in 2007 have contributed to more than 1,000 papers [3, 4]. Such attributions continue to accumulate for several years after data sets are first made publicly available.

In the long run, the NSF policy change will do much more than just reward an investigator who has authored a popular statistics package, for instance. It will change the game, because it will alter how scientists assess research impact.

The new NSF policy states: “Acceptable products must be citable and accessible including but not limited to publications, data sets, software, patents, and copyrights.” By contrast, previous policies allowed only “patents, copyrights and software systems” in addition to research publications in the biography section of a proposal, and considered their inclusion to be a substitute for the main task of listing research papers.

Still, the status quo is largely unchanged. Some types of NSF grant-renewal applications continue to request papers alone. Indeed, several funders — including the US National Institutes of Health, the Howard Hughes Medical Institute and the UK Medical Research Council — still explicitly ask for a list of research papers rather than products.

Even when applicants are allowed to include alternative products in grant applications, how will reviewers know if they should be impressed? They might have a little bit of time to watch a short video on YouTube demonstrating a wet-lab technique, or to read a Google Plus post describing a computational algorithm. But what if the technique takes more time to review, or is in an area that is outside the reviewer’s expertise? Existing evaluation mechanisms often fail for alternative products — a YouTube video, for example, has no journal title to use as a proxy for anticipated impact. But it will definitely receive a number of downloads, some ‘likes’ on Facebook, a few Pinterest bookmarks and discussion in blogs.

Tracking trends

Many altmetrics have already been gathered for a range of research products. For example, the data repositories Dryad and figshare track download statistics (figshare is supported by Digital Science, which is owned by the same parent company as Nature). Some repositories, such as the Inter-university Consortium for Political and Social Research, provide anonymous demographic breakdowns of usage.

Specific tools have been built to aggregate altmetrics across a wide variety of content. Altmetric.com (also supported by Digital Science) reveals the impact of anything with a digital object identifier (DOI) or other standard identifier. It can find mentions of a data set in blog posts, tweets and mainstream media (see go.nature.com/yche8g). The non-profit organization ImpactStory (http://impactstory.org), of which I am a co-founder, tracks the impact of articles, data sets, software, blog posts, posters and lab websites by monitoring citations, blogs, tweets, download statistics and attributions in research articles, such as mentions within methods and acknowledgements [5]. For example, a data set on an outbreak of Escherichia coli has received 43 ‘stars’ in the GitHub software repository, 18 tweets and two mentions in peer-reviewed articles (see go.nature.com/dnhdgh).

Such altmetrics give a fuller picture of how research products have influenced conversation, thought and behaviour. Tracking them is likely to motivate more people to release alternative products — scientists say that the most important condition for sharing their data is ensuring that they receive proper credit for it [6].

The shift to valuing broad research impact will be more rapid and smooth if more funders and institutions explicitly welcome evidence of impact. Scientists can speed the shift by publishing diverse research products in their natural form, rather than shoehorning everything into an article format, and by tracking and reporting their products’ impact. When we, as scientists, build and use tools and infrastructure that support open dissemination of actionable, accessible and auditable metrics, we will be on our way to a more useful and nimble scholarly communication system.

References

  1. Priem, J., Costello, K. & Dzuba, T. Figshare http://dx.doi.org/10.6084/m9.figshare.104629 (2012).
  2. Fausto, S. et al. PLoS ONE 7, e50109 (2012).
  3. Piwowar, H. A., Vision, T. J. & Whitlock, M. C. Nature 473, 285 (2011).
  4. Piwowar, H. A., Vision, T. J. & Whitlock, M. C. Dryad Digital Repository http://dx.doi.org/10.5061/dryad.j1fd7 (2011).
  5. Oettl, A. Nature 489, 496–497 (2012).
  6. Tenopir, C. et al. PLoS ONE 6, e21101 (2011).

July 11, 2013

hypocrite?

Filed under: Uncategorized — Heather Piwowar @ 7:07 am

Quick post on an important topic.  I don’t publish in OA journals 100% of the time…. almost always, but not always.  What gives?

This deserves more discussion than I’ll give it in this post, but here’s a recent response to someone who was rightly curious about the following links to my three most recent papers:

http://dx.doi.org/10.1002/bult.2013.1720390404
http://dx.doi.org/10.1002/bult.2013.1720390405
http://www.nature.com/nature/journal/v493/n7431/full/493159a.html

Here was my response:

Great question.  The first two are actually also available OA, under a CC-BY license (though I was disappointed to see the publishers didn’t make the CC-BY license clear, and I didn’t know that they would also appear in a format that makes it look like they are subscription only… I’m unhappy about that but oh well).
The final one was upon invitation from Nature.  I hemmed and hawed about whether to do it, since it wasn’t OA, but decided ultimately to go for it because (a) it was an opinion piece, not primary research, and (b) if opinion pieces about open things are only published in open places, they miss the many audiences that aren’t yet convinced about the value of open.  I tried to mitigate the lack of openness by publishing as much preprint material about it as I could, negotiating hard for no paywall for its first week, and I’ll post the preprint as soon as I can (I think July 10th?  will double check)
The purist-loving part of me does wish I only published in OA places and is unsettled when I don’t.  More than that, though, I want to change the world to be more open, and I think I can do that best by sometimes publishing in places that aren’t OA.  (yet.  ;)  )
Edited:  moved the “sometimes” in the last sentence to better capture my meaning.

Sending a message

Filed under: Uncategorized — Heather Piwowar @ 6:43 am

More and more of us are being choosy about where to place our publishing-related efforts:  we say no to reviewing requests for non-OA journals, we preferentially choose to publish in OA journals,  we refuse to publish in Elsevier journals… we figure out where our principles meet our pragmatic needs and we make decisions accordingly.

I’ve heard people wonder whether they should describe *why* they are declining a review or an article invitation, instead of just saying no.  A few months ago I was invited to contribute a paper to an Elsevier journal as part of a special collection.  I responded as such:

Thanks for the invitation.  It sounds like a great special issue!
That said, I’ve signed the Elsevier boycott. In the spirit of encouraging you to understand how seriously some scholars dislike Elsevier’s current policies and wish you would move your journal to a truly Open publisher I’m not willing to write anything for your publication.
Hopefully we’ll have a chance to collaborate another way some day.
Sincerely,
Heather

And here, in part, was the response from the guest editor (posted with permission):

Thanks! I think this is great. We will make a special point about how two have refused to publish with Elsevier and how the quality of Elsevier journals suffers because of its policies and lobbying. That way your message will reach the audience (and editors).

If we want people to hear that we — as scholars, librarians, students, the public — want change, then we need to speak up.  We’ll be surprised how often our voices reinforce those of others and are forwarded on.

