The Foundations of Research Computing (FORC) Camp is a three-day data skills immersion program offered to graduate students through a collaboration among the GSAS Master's College, the Arts & Science Office of Teaching Excellence and Innovation, and Data Services (NYU Libraries and IT). FORC will give participants a thorough grounding in the digital skills essential for their research. There is something for every disciplinary approach, from creating visualizations that require no coding skills to data harvesting and statistical analysis. Choose the track that is right for your needs; in the Project Examples sections, we’ve included links to research projects NYU graduate students have worked on using the skills covered in each track.
This summer we’ll be expanding the offerings to include the two Generative AI tracks we piloted at the Jterm FlashFORC: Noncoding Approaches to Using Generative AI for Research and Configuring Generative AI for Research Using Python. That brings FORC to a total of five tracks: two that do not require coding and three that teach coding skills. In addition, all participants will get an overview of the possibilities of using Generative AI in research, as well as the data and ethical considerations involved.
Each day’s schedule will include four hours of interactive instruction followed by a 90-minute office hours/tea time that offers opportunities for one-on-one consults, meetings with subject area librarians, and building connections with your graduate student colleagues. An optional 90 minutes of homework will allow you to apply digital skills to your own research and receive feedback from FORC instructors. Lunch will be provided daily.
Participants who complete all 12 hours of the FORC curriculum will receive a letter of completion for their portfolio detailing the skills covered in their track.
Send inquiries to asteaching@nyu.edu.
If your research trajectory doesn’t require you to learn coding, but you still want to be able to create, analyze, and display data sets using methods like mapping, text searching, visualization, and digital repositories, this is the track for you. We'll look at ways to incorporate digital storytelling and survey the different types of software platforms available for visualizing your research data. See full Track One details here.
Here is a sampling of projects NYU graduate students have worked on using the types of skills covered in this track:
Creating Interactive Maps: Mapping Artistic Activism Project (MAAP)
Building Digital Displays from Archives: Visualizing the Victorian Polar Network
Assembling a Text Corpus: Digitizing Chemical Humanities
Creating Data Visualizations: Insuring Slavery: Underwriting Risk in the 18th Century
Building a Website Repository for Research Artifacts: Archive of Cuban Socialism
Generative AI offers an exciting opportunity to interact with large amounts of data and discover connections in ways that were not previously possible with traditional research methods. However, commercial Generative AI products present a “black box” to users, who may not know which Large Language Model (LLM) is being used or how queries are being modulated, factors that can affect the quality and usefulness of outputs. In this track, you will learn about differences in LLM offerings and their applications, standard methods for modulating outputs, and the basic contours of narrowing and improving outputs using a retrieval-augmented generation (RAG) workflow. You will also learn how to configure NotebookLM to meet your research needs. No coding skills are required. See full Track Two details here.
If you’re ready to get out of Excel and learn some simple coding functions to assemble, analyze, and display research data, this is the track for you. Participants will learn how to use basic Python to automate tasks and to harvest and manipulate data. This track will also prepare you to learn more robust coding in the future (whether in Python or options like R or JavaScript). See full Track Three details here.
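To give a flavor of what harvesting and manipulating data with basic Python can look like, here is a minimal sketch (not FORC course material) that downloads a CSV file from the web and tallies one of its columns using only the standard library. The URL and the "category" column name are placeholders you would replace with your own data.

```python
# Sketch: harvest a public CSV over the web and summarize one column.
import csv
import io
import urllib.request
from collections import Counter

url = "https://example.org/data.csv"  # placeholder; substitute a real dataset URL

# Download the file and decode it to text.
with urllib.request.urlopen(url) as response:
    text = response.read().decode("utf-8")

# Parse rows into dictionaries keyed by the CSV's column headers.
rows = list(csv.DictReader(io.StringIO(text)))
if rows:
    print(f"Harvested {len(rows)} rows with columns: {list(rows[0].keys())}")

# Tally values in an assumed 'category' column as a simple manipulation step.
counts = Counter(row["category"] for row in rows if "category" in row)
for value, n in counts.most_common(5):
    print(value, n)
```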
Here is a sampling of projects NYU graduate students have worked on using the types of skills covered in this track:
Extraction of Word Embeddings from a Corpus: Framing Democracy: Characterizing China's Negative Legitimation Propaganda using Word Embeddings
Automated Extraction of Semantic Motifs from a Large Text Corpus: Who Kisses Whom: Gendered Interaction in American Novels 1880-2000
DH Project Using Python/Flask: Demystifying the Digitization of Texts: New Textual Analysis for the Medieval History of Islamic Mysticism, A Corpus of Digitally Neglected Texts
If your research will involve statistical analysis of data, this track will give you the thorough grounding in R required for your graduate work. The course is useful even if you have dabbled in R before, because it provides the foundational skills that will let you move easily to more advanced applications and ensure your mastery of research essentials such as reproducibility. See full Track Four details here.
Here is a sampling of projects NYU graduate students have worked on using the types of skills covered in this track:
Analysis of a publicly available data set: Do Black & LatinX Students in NYC Have Equal Access to Computer Science Instruction?
Data Set Creation and Text Analysis: Uncovering the Mui Tsai Experience
Data Set Creation and Statistical Analysis: Testing the effects of Facebook usage in an ethnically polarized setting (data set is here)
In this track, participants will learn about Retrieval Augmented Generation (RAG) and how it enhances AI models by combining external data retrieval with large language models (LLMs). They will explore the steps to build a RAG pipeline, including embedding text into vector representations, retrieving relevant context from databases, and augmenting prompts to generate accurate answers. The track also provides practical insights into when to use RAG rather than fine-tuning a model, and how to integrate this approach when building dynamic, context-aware AI solutions.
For additional information, see the Retrieval Augmented Generation (RAG) page.
Prerequisites: a basic understanding of Python.
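As a rough illustration of the pipeline steps named above, here is a minimal, self-contained sketch (not FORC course material): a toy bag-of-words embedding stands in for a real embedding model, retrieval is a cosine-similarity lookup over a handful of example sentences, and the final call to an LLM is left as a placeholder. The documents, query, and helper names are all hypothetical.

```python
# Sketch of the three RAG steps: embed documents, retrieve relevant context,
# augment the prompt before it goes to an LLM.
import math
from collections import Counter

documents = [
    "FORC Camp is a three-day data skills immersion for graduate students.",
    "Track Five teaches how to build a retrieval-augmented generation pipeline in Python.",
    "Lunch is provided daily, followed by office hours and tea time.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector (a real pipeline would use an embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Embed every document once (a stand-in for a vector database).
doc_vectors = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve the document most similar to the query.
query = "What does Track Five cover?"
query_vec = embed(query)
context, _ = max(doc_vectors, key=lambda pair: cosine(query_vec, pair[1]))

# 3. Augment the prompt with the retrieved context before calling an LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in a real pipeline, this prompt would be sent to an LLM
```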