This is a work in progress.
Last updated: February 12, 2026.
Our group’s mission is to produce rigorous, reproducible, high-impact computational materials/quantum simulation research, including new methods, careful benchmarking/validation, and transparent research artifacts (code, workflows, datasets).
Authorship, conference travel, internships, and external introductions are some of the strongest incentives a PI can offer. This policy is designed to reward the behaviors that advance our long-term vision, especially:
taking ownership of hard problems,
developing/validating new methods,
making work reproducible and reusable,
mentoring and enabling others (not gatekeeping),
writing clearly and responsibly.
We explicitly want to avoid a culture where people compete for “easy credit” (e.g., running a standard workflow once) while neglecting the difficult work (method development, rigorous design, failure analysis, uncertainty, robust documentation).
This document is inspired in part by the Kosslyn “points” approach and aligns with common publication-ethics expectations (e.g., ACS/IEEE/ACM) as well as broader research integrity guidance.
No honorary authorship. Authorship is earned through real intellectual/creative contribution and accountability.
Transparency early, not late. We discuss authorship expectations at project kickoff, then revisit periodically.
Accountability is required. Authors must be able to explain/defend their part of the work and approve the final manuscript.
Reproducibility is foundational. If the work cannot be reproduced from archived inputs/outputs/scripts (and documented decisions), credit is at risk.
Incentives should push “hard work.” The system is intentionally structured to reward contributions that are difficult to replace (new ideas + design + methods + validation + interpretation), not just time spent.
A person qualifies for authorship when they make a substantial intellectual/creative contribution to one or more core research phases and they meet accountability expectations (review/approval of the manuscript and ability to defend their contribution).
In computational materials/physics, “idea” and “design” are often inseparable: posing the right question usually implies choosing the right Hamiltonian/model, approximations, computational protocol, validation plan, benchmarks, and failure modes. This policy treats those as tightly coupled.
For every paper-level project, when possible, we create a lightweight Contribution Plan (one page is enough) that includes:
the scientific question + scope,
the proposed computational/experimental-design logic (protocol, controls/benchmarks, success criteria),
who owns which components,
expected artifacts (scripts, inputs, data products, figures, etc.),
the initial authorship expectations and how we’ll track contributions.
We revisit it at milestones (e.g., after the first benchmark suite; before drafting; before submission).
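For concreteness, here is a minimal sketch of what a Contribution Plan could look like as a version-controlled file. The project, components, and names are hypothetical, not prescribed:

```python
# Hypothetical Contribution Plan sketch (project, owners, and criteria are
# illustrative). A plain dict keeps it lightweight and easy to keep under
# version control alongside the project code.
contribution_plan = {
    "question": "Does defect X shift the band gap of material Y?",
    "design": {
        "protocol": "convergence study + benchmark against a reference method",
        "controls_benchmarks": ["pristine supercell", "literature values"],
        "success_criteria": "gap converged to <10 meV; benchmark error quantified",
    },
    "owners": {
        "study_design": ["Alice"],
        "method_validation": ["Alice", "Bob"],
        "infrastructure": ["Bob"],
        "data_generation": ["Carol"],
        "analysis": ["Alice", "Carol"],
        "writing": ["Alice"],
    },
    "artifacts": ["input decks", "post-processing scripts", "figure notebooks"],
    "authorship_notes": "revisit after the first benchmark suite",
}
```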
We use a default total of 1000 points. Within each phase, points are split among contributors in proportion to their contributions.
Replaceability principle: contributions that are hard to replace (special expertise, unique insight, rescuing a failure mode, creating a reusable tool) should be weighted more heavily within a phase. (This is the “don’t reward only low-hanging fruit” mechanism.)
Phase A: Scientific question + study design (200 pts)
Framing the question; choosing the right physical model/assumptions.
Designing the computational protocol: controls, benchmarks, ablations, convergence strategy, uncertainty/error analysis plan.
Defining what would falsify the claim (failure modes).
Phase B: Method/protocol development + validation (250 pts)
New workflows, new models, new analysis methods, new metrics.
Validation against benchmarks, ground truth, alternative methods.
Robustness checks; identifying and fixing methodological flaws.
Phase C: Implementation & research infrastructure (150 pts)
Building/maintaining the codebase/workflow (scripts, pipelines, tests, containers).
Creating reusable datasets, automated post-processing, provenance tracking.
Significant optimization or enabling functionality (not cosmetic refactoring).
Phase D: Data generation / computational experiments (100 pts)
Running calculations/simulations with scientific judgment (parameter selection, triage, debugging, diagnosing failure).
Pure “button pushing” of a mature workflow typically earns minimal credit unless paired with insight/debugging.
Phase E: Analysis, interpretation, and scientific storytelling (200 pts)
Extracting physical insight, constructing the narrative.
Developing key plots/figures that reveal the phenomenon (not just prettifying).
Interpreting anomalies; sensitivity analysis; uncertainty quantification.
Phase F: Writing, revision, and peer-review execution (100 pts)
Drafting sections that survive to the final paper.
Integrating coauthor feedback into a coherent manuscript.
Responding to reviewers with technical substance and new analyses.
Note: These are defaults. Some projects legitimately shift weight (e.g., a methods paper may move points from D → B/C; a perspective/review may shift more into F/E).
<10% (<100 pts): acknowledgement (unless the contribution is uniquely enabling; rare but possible).
10–19% (100–199 pts): co-author (typically middle author).
≥20% (≥200 pts): major author contribution (often early author list).
Lead author: typically the person with the largest total contribution and who drives the scientific narrative and drafting.
Co-first authors are allowed when two people truly share leadership-level contributions.
Last author is typically the PI/senior lead responsible for stewardship, unless otherwise agreed.
Before submission, each contributor submits a short self-assessment of contributions; the leadership team reconciles with the project log and the Contribution Plan.
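To make the arithmetic concrete, here is a minimal sketch of how reconciled per-phase splits could be tallied into authorship tiers. The phase weights are the defaults above; the names and fractions are hypothetical:

```python
# Default phase weights (points) from the policy above.
PHASES = {"A": 200, "B": 250, "C": 150, "D": 100, "E": 200, "F": 100}

# Hypothetical per-phase contribution fractions (each phase must sum to 1.0).
splits = {
    "A": {"Alice": 0.6, "PI": 0.4},
    "B": {"Alice": 0.5, "Bob": 0.5},
    "C": {"Bob": 1.0},
    "D": {"Carol": 1.0},
    "E": {"Alice": 0.5, "Carol": 0.3, "PI": 0.2},
    "F": {"Alice": 0.6, "PI": 0.4},
}

def tally(splits):
    """Sum each person's points across phases and map to an authorship tier."""
    totals = {}
    for phase, shares in splits.items():
        assert abs(sum(shares.values()) - 1.0) < 1e-9, f"phase {phase} must sum to 1"
        for person, frac in shares.items():
            totals[person] = totals.get(person, 0.0) + frac * PHASES[phase]
    total_points = sum(PHASES.values())  # 1000 with the default weights
    for person, pts in sorted(totals.items(), key=lambda kv: -kv[1]):
        pct = 100 * pts / total_points
        tier = ("major author" if pct >= 20
                else "co-author" if pct >= 10
                else "acknowledgement")
        print(f"{person}: {pts:.0f} pts ({pct:.0f}%) -> {tier}")

tally(splits)
```

For these hypothetical splits, Alice lands at about 40% (major author), Bob at about 28% (major author), and the PI and Carol at 16% each (co-authors).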
The following activities, by themselves, do not count toward authorship:
Being present in meetings.
Proofreading/wordsmithing without changing scientific content.
Pure formatting of figures/manuscripts (resolution, aesthetics only).
Re-running old calculations purely for training.
Suggesting a paper to read (helpful, but not authorship credit by itself).
Routine execution of a mature workflow without scientific decision-making.
Exception: If any of the above activities uncover a flaw, improve the method, change interpretation, fix a bug, or enable reproducibility, they can become credit-worthy.
In computational research, data management is part of the science. If you cannot provide the artifacts needed to reproduce your results, you may lose authorship credit.
Minimum expectation (project-specific):
archived inputs, outputs, and key scripts,
version info (code commit, environment, pseudopotentials/basis sets, etc.),
provenance notes: “why we chose X,” not just “we ran X.”
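One lightweight way to meet this expectation is to write a small provenance record next to each run's outputs. This is a sketch, not a prescribed tool; the file name and fields are assumptions:

```python
# Minimal provenance-capture sketch: record code version, environment, and
# the reasoning behind key choices next to each run's outputs.
import json
import platform
import subprocess
from datetime import datetime, timezone

def write_provenance(run_dir, decisions):
    """Write a provenance.json into run_dir. `decisions` maps choice -> why."""
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        commit = "unknown (not a git checkout)"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "code_commit": commit,
        "python": platform.python_version(),
        "platform": platform.platform(),
        "decisions": decisions,  # "why we chose X," not just "we ran X"
    }
    with open(f"{run_dir}/provenance.json", "w") as fh:
        json.dump(record, fh, indent=2)

# Example: document why a cutoff was chosen, not just its value.
write_provenance(".", {"plane-wave cutoff 60 Ry": "converged energy to 1 meV/atom"})
```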
Opportunities we allocate based on demonstrated behaviors:
conference travel / spotlight talks,
internship recommendations and external introductions,
leading collaborations,
first-author ownership of the next project.
What we reward most (high leverage):
method development + validation,
owning difficult debugging and failure analysis,
building reusable infrastructure/datasets,
clear, reproducible documentation,
mentoring others and raising group capability.
What we discourage (low leverage):
competing for trivial tasks to accumulate superficial credit,
gatekeeping knowledge/tools,
avoiding hard uncertainty/robustness checks,
“credit chasing” without ownership.
We talk early and often; surprises at the end are a process failure.
If a dispute arises: we review the Contribution Plan, the contribution log, and the reproducibility artifacts.
Final decision responsibility sits with the PI, with the goal of fairness, transparency, and protecting the group’s standards.
Read the condensed Slack version (below).
Ask your project lead how contributions are being tracked.
Keep a weekly log of what you did, what decisions you made, and what artifacts you produced.
If you want authorship/leadership opportunities: prioritize hard, enabling work and make it reproducible.
Kosslyn points-based authorship criteria (lab page / PDF):
https://kosslynlab.fas.harvard.edu/file_url/110
National Academies: On Being a Scientist (Responsible Conduct in Research):
https://www.nationalacademies.org/projects/PGA-COSEMPUP-18-P-40/publication/12192
ACS publication ethics (overview hub):
https://researcher-resources.acs.org/publish/publication_ethics
IEEE publishing ethics / guidelines hub (links to PSPB Operations Manual sections):
https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/fundamental-publishing-guidelines-and-principles/
ACM policy on authorship (and related policies):
https://www.acm.org/publications/policies/new-acm-policy-on-authorship
https://www.acm.org/publications/policies
Alberts (Science editorial): “Promoting scientific standards” (2010):
https://www.science.org/doi/10.1126/science.1185983
Scientific Authorship: Credit and Intellectual Property in Science (Biagioli & Galison, eds., 2003):
https://www.routledge.com/Scientific-Authorship-Credit-and-Intellectual-Property-in-Science/Biagioli-Galison/p/book/9780415942935
Authorship in Mendoza Group (TL;DR)
Authorship rewards real intellectual/creative contribution + accountability.
We decide expectations at project kickoff (Contribution Plan), then revisit.
In our field, idea + study design are inseparable (model, protocol, controls, benchmarks).
Default scoring = 1000 pts across phases (question/design; method+validation; infrastructure; data gen; analysis; writing).
<10% = acknowledgement; 10–19% = co-author; ≥20% = major author; lead author = biggest driver + writing/story.
“Button pushing,” formatting, and meeting attendance don’t count unless they add scientific insight or fix failures.
Reproducibility is required: archive inputs/outputs/scripts + decisions.
We reward “hard work”: methods, validation, debugging, reusable tools, mentoring.
Opportunities (travel/internships/introductions/leadership) follow the same incentives.
If unsure: ask early; no surprises at submission.
This version has been superseded by the new version above.
Last updated: February 11, 2026.
A substantial creative contribution in one or more of the following phases of research is sufficient in my lab to warrant inclusion as an author of the paper reporting the theoretical work, simulations, or computational experiment(s).
A lesser creative contribution warrants an acknowledgement in the footnote of the paper (and we will let the individual know that he/she will be acknowledged in a paper before listing them). We will determine whether someone deserves either of these credits, and determine the ordering of authors, by counting up each person's contribution to each phase. As noted below, we assign a larger weight to the first and last phases, and to any other phase that requires special expertise or creativity (e.g., data analysis, in some cases).
In my lab, we consider 6 criteria, and weight them as follows; often the "points" at each stage are divided among several people. If a person *contributes creatively* at any of these phases, that is enough to qualify him or her for an acknowledgment or as a co-author, depending on the magnitude of the contribution. Moreover, the "replaceability criterion" leads us to ask whether one person's contributions could just as easily have been made by others; if not (i.e., if the person would have been difficult to replace), that contribution is weighted more highly.
The point totals of each phase should be agreed upon in advance; some projects, for example, use standard designs (e.g., "Stroop") or analyses (e.g., correlations), in which case the number of points for that phase should be reduced. The following are "default" point values for a typical research paper, with a total of 1000. The total points for each phase is divided among authors in proportion to their contribution in that phase of the project. In my lab, if someone contributed more than 0 but less than 15% of the total number of points, they are acknowledged in the footnote. If they contributed at least 15%, they are an author, and the ordering of authorship is determined by the relative number of points.
1. The idea (250 points): Without the idea, nothing else happens. If the idea grew out of a discussion, all who contributed get "credit"--but perhaps not equally so, if one or more people were primarily responsible for the insights leading to the best way to pose the question to be answered by the research and the logic of the design. Typically the person generating the ideas is the supervisor, so some of the points may go to the supervisor.
2. The design (100 points): The details of the design include counterbalancing issues, control conditions, whether a within-subjects or between-subjects design is used, and so on. A bad design will render the results useless later, so this is a critical step. Typically the person doing the design is the supervisor, so some of the points may go to the supervisor.
3. The implementation (100 points): Someone must implement the design into actual materials, devise instructions, and so on. To the extent this is simple boilerplate (a variation on well-developed methods using available materials), this step may be given much less weight (perhaps only 5 points). Typically the person doing the implementation is supervised closely, so some of the points may go to the supervisor.
4. Conducting the theoretical/numerical experiments (100 points): The person who runs and tests computer codes *can* earn up to 100 points, but may earn merely 5 points if all they do is mindlessly run the written codes. Authorship is awarded only to those who contribute substantially and creatively to a project; if someone is receiving class credit or payment and all they do is follow instructions and run calculations, this is worthy of an acknowledgment in the paper, but not authorship. On the other hand, if they notice what the codes are actually doing and make constructive suggestions for how to improve the theoretical/numerical experiment, this qualifies them to be included as an author. Specifically, if one notices problems in the method or procedure (and re-programs or recompiles a new version), makes constructive suggestions about how to repair them, observes interesting hints about what is really going on in the results, and so on, this counts as a substantial creative contribution at this stage. Typically several people (including the supervisor) conduct the theoretical/numerical experiments, so the points are distributed accordingly.
5. Data analysis (200 points): Simply running the data through a program is not enough to earn authorship at this phase. However, devising some new way to look at the data (e.g., as difference scores or ratios of some kind), or otherwise contributing a novel insight into the best way to reveal the underlying patterns in the data, may be sufficient. Particularly labor-intensive or creative data analysis, such as analyzing wave functions or other data- or code-intensive work, can "earn" more points. Depending on the project, the maximum of 200 points may or may not be allocated. Typically several people (including the supervisor) contribute to the data analysis, so the points are distributed accordingly.
6. Writing (300 points). Nothing happens if the results are not reported. Writing is usually shared by several people. Credit is allocated primarily to the one who shapes the conceptual content, although a good and insightful literature review also counts heavily (however, suggesting papers to read does not earn points). If someone writes a first draft that is not used at all, this does not contribute towards points: good intentions are not enough; the question is who has contributed how much to the final product. Similarly, the sheer amount of time one has spent on the project is not relevant; competent people who work more efficiently should not be penalized. Typically the person doing the writing, drafts, and revisions is the supervisor or is supervised closely, so some of the points may go to the supervisor.
Per the guidelines above, the following activities do not count toward any authorship points:
- Being present at a meeting about this project or research paper
(otherwise office neighbors would be on everyone's paper)
- Modifying the text or wordsmithing paragraphs of a written manuscript with results already given
(otherwise secretaries/editors/referees would be on everyone's paper)
- Modifying the figures to make them look better or increase their resolution
(otherwise image editors would be on everyone's paper)
- Repeating previous results or calculations as part of training
(otherwise, by checking past papers, we would all be on their papers)
- Recommending a paper or related reading to further understand a concept
(googling something and recommending it as a read does not count; otherwise everyone on social media would be on everyone's papers)
If there is a dispute, authors who cannot provide raw data will not be co-authors. Data management is step one.
The key to fair allocation of authorship, and equitable ordering, is to have criteria that are known to all and that all can discuss. It is best to walk through each of these criteria at the outset of the project. In addition, in my lab each contributor sends his or her own assessment of their contribution after the project is relatively complete but *before* the paper is written. If someone is near the total required to be an author but not quite there, they are offered the opportunity to take a larger role in the writing or data analysis process, thereby allowing him or her to accrue more points (however, if this opportunity is not taken the first time, there will not be a second chance). The main message is to give each other constant updates on the different projects and to be organized about it; this way your contribution is documented throughout.
References:
1. Authorship Criteria (Stephen M. Kosslyn)
2. https://books.google.com/books/about/Scientific_Authorship_2003
3. https://www.nap.edu/catalog/12192/on-being-a-scientist-a-guide-to-responsible-conduct-in
4. http://science.sciencemag.org/content/327/5961/12.full
5. https://jamanetwork.com/journals/jamanetworkopen/pages/instructions-for-authors (see end)
This is a draft that will be updated; some newly proposed models are below:
ACS: https://pubs.acs.org/userimages/ContentEditor/1218054468605/ethics.pdf
AMS: https://www.ams.org/about-us/governance/policy-statements/sec-ethics
ACM: https://www.acm.org/publications/policies
In general, papers are written using Overleaf/LaTeX, with supplementary files stored in Dropbox. Please follow these essential steps to ensure high-quality output:
Ensure most figures, if not all, are in vector format for clarity and scalability. [See the guide] (A minimal plotting sketch follows this list.)
Follow the manual for sharing code on GitHub to ensure reproducibility. [See the guide]
When preparing files for arXiv, refer to the detailed instructions provided: [pptx link] [video tutorial].
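For the vector-format step, here is a minimal matplotlib sketch; the file names and figure content are placeholders. Saving to PDF or SVG keeps line plots sharp at any zoom level:

```python
# Save a figure in vector format (PDF/SVG) so it stays sharp at any scale.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(3.4, 2.6))  # roughly single-column width
ax.plot(x, np.sin(x), label=r"$\sin(x)$")
ax.set_xlabel("x")
ax.set_ylabel("amplitude")
ax.legend()
fig.tight_layout()
fig.savefig("figure1.pdf")  # vector; prefer over .png for line plots
fig.savefig("figure1.svg")  # also vector, editable in Inkscape
```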
The main differences between preparing submissions for arXiv and for a journal are listed below (a small flattening sketch follows the list):
arXiv submission requires that all files (manuscript, figures, and bibliography) be in the same folder.
arXiv submission requires that the main content and supporting information be combined into a single LaTeX file.
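Here is a minimal sketch of flattening a project for arXiv under an assumed layout (a `paper/` root with a `figures/` subfolder; adapt the paths to your project):

```python
# Gather manuscript, figures, and bibliography into one flat folder for arXiv.
import shutil
from pathlib import Path

src = Path("paper")            # hypothetical project root
dst = Path("arxiv_submission")
dst.mkdir(exist_ok=True)

# Copy every .tex, .bib, and figure file into the same folder (flattened).
for pattern in ("*.tex", "*.bib", "figures/*.pdf"):
    for f in src.glob(pattern):
        shutil.copy(f, dst / f.name)

# Reminder: merge the main text and supporting information into one .tex file,
# and update \includegraphics paths since figures now sit beside the source.
```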
By following these guidelines, you’ll help ensure that our submissions meet the standards for both preprints and journal publications.
Use MSU Turnitin to check the similarity index of your manuscript, i.e., the percentage of text that matches other sources, and reduce it to the recommended level before submitting to a journal. The Turnitin manual linked below will guide you.
Turnitin Manual: https://www.dropbox.com/s/msyxba0l580798b/Turnitin_Manual.pdf?dl=0
Turnitin’s similarity checking guides students, instructors, and individuals on the appropriate use of sources, paraphrasing, over-quoting, and citing. It also lets authors opt out of the database, protecting our drafts from text imitation. Papers submitted to Turnitin may be compared against billions of internet documents, archived internet data no longer available on the live web, a local database of previously submitted papers, and subscription databases of periodicals, journals, and publications. Users can view the Originality Reports for their own submissions, along with a Similarity Report listing all matching sources, the percentage of text that matches, and a link to the online content. A lower percentage indicates the content is likely original; a higher percentage indicates it was likely copied from another source. Please follow the Turnitin Manual to get an idea of the acceptable percentage.
You can select any journal from APS, ACS, the Nature Publishing Group, AAAS, or another publisher.
However, if you want the paper we are working on to be open access, you can check these options that are available through MSU:
https://lib.msu.edu/about/collections/scholcomm/support/
A new addition to this list is all ACS journals, so we can publish in any of them!
American Chemical Society (ACS). MSU corresponding authors can publish open access at no cost to them, thanks to a Read and Publish agreement through the Midwest Collaborative for Library Services for 2024-2026.
"Pretty much all ACS journals are included except a few odd ones that are new and listed on the link https://acsopenscience.org/customers/mcls/ that seem to have waivers anyway. It’s an agreement for unlimited MSU corresponding author publishing during 2024-2026, so there’s no chance of running out of funds. "
Royal Society Publishing. MSU Libraries has an agreement with the Royal Society for 2023 to cover open access publishing by MSU corresponding authors for all article types in all their journals.
Philosophical Transactions A: https://royalsocietypublishing.org/journal/rsta
Proceedings of the Royal Society A: https://royalsocietypublishing.org/journal/rspa
Wiley: The Big Ten Academic Alliance has a Read & Publish agreement with Wiley to cover unlimited open access publishing in Wiley hybrid and Wiley or Hindawi gold open access journals for MSU corresponding authors for 2023-2025.
Wiley hybrid journals included in the agreement:
Angewandte Chemie International Edition
Advanced Theory and Simulations
Advanced Materials
Advanced Energy Materials
Annalen der Physik: https://onlinelibrary.wiley.com/journal/15213889
The MSU Libraries supports "Subscribe to Open" journal projects from several publishers, including Annual Reviews:
Annual Review of Condensed Matter Physics: https://www.annualreviews.org/journal/conmatphys
Annual Review of Physical Chemistry: https://www.annualreviews.org/journal/physchem
Annual Review of Materials Research: https://www.annualreviews.org/journal/matsci
The MSU Libraries also supports "Subscribe to Open" projects from the American Institute of Physics, covering the Journal of Applied Physics (JAP) and Physics of Plasmas:
Journal of Applied Physics: https://pubs.aip.org/aip/jap
IOP (Institute of Physics). The Big Ten Academic Alliance has signed a Read and Publish agreement to cover unlimited open access publishing by MSU corresponding authors for eligible IOP journals (not including AAS) for 2023-2025.
2D Materials: https://iopscience.iop.org/journal/2053-1583
Journal of Physics: Condensed Matter: https://iopscience.iop.org/journal/0953-8984
Journal of Physics D: Applied Physics: https://iopscience.iop.org/journal/0022-3727
Nanotechnology: https://iopscience.iop.org/journal/0957-4484
Semiconductor Science and Technology: https://iopscience.iop.org/journal/0268-1242
Modelling and Simulation in Materials Science and Engineering: https://iopscience.iop.org/journal/0965-0393
Quantum Science and Technology: https://iopscience.iop.org/journal/2058-9565
Royal Society of Chemistry. MSU Libraries has an agreement with RSC for 2024-2026 (including articles submitted Oct-Dec 2023) for unlimited open access publishing of all article types for MSU corresponding authors in both hybrid and gold OA journals.
With this agreement, some journals we can try to publish in are:
Energy & Environmental Science: https://www.rsc.org/journals-books-databases/about-journals/energy-environmental-science/
Journal of Materials Chemistry A: https://www.rsc.org/journals-books-databases/about-journals/journal-of-materials-chemistry-a/
Chemical Science: https://www.rsc.org/journals-books-databases/about-journals/chemical-science/
Nanoscale: https://www.rsc.org/journals-books-databases/about-journals/nanoscale/
Physical Chemistry Chemical Physics: https://www.rsc.org/journals-books-databases/about-journals/pccp/
Sustainable Energy & Fuels: https://www.rsc.org/journals-books-databases/about-journals/sustainable-energy-fuels/
Chem Soc Rev: https://www.rsc.org/journals-books-databases/about-journals/chem-soc-rev/
or others.
Cambridge University Press. MSU corresponding authors may publish research articles open access in over 370 Cambridge University Press (CUP) gold and hybrid journals during 2021-2023 for no cost to them. Through the Midwest Collaborative for Library Services consortium, the MSU Libraries has signed a three-year "transformative" read-and-publish agreement with CUP to pay for those fees out of library funds.
MRS Bulletin: https://www.cambridge.org/core/journals/mrs-bulletin
MRS Advances: https://www.cambridge.org/core/journals/mrs-advances
MRS Communications: https://www.cambridge.org/core/journals/mrs-communications
MRS Energy & Sustainability: https://www.cambridge.org/core/journals/mrs-energy-and-sustainability
I have put this in the Wiki, but you can update it as you discover more journals we might be able to publish open access in through MSU.