AI for Economists: Prompts & Resources

Notes on prompt examples

This page contains example prompts and responses that showcase how generative AI, in particular large language models (LLMs) such as GPT-4, can benefit economists.

Example prompts are shown from six domains: ideation and feedback; writing; background research; coding; data analysis; and mathematical derivations.

The framework, as well as some of the prompts and related notes, comes from Korinek, A. 2023. "Generative AI for Economic Research: Use Cases and Implications for Economists", Journal of Economic Literature, 61 (4): 1281–1317.

Each application area includes 1-3 prompts and responses from an LLM, often from the field of development economics, along with brief notes. The prompts will be updated periodically.

Code is shown in green font, while comments within prompts and responses are in red.

The responses are generated using the GPT-4 model via (1) OpenAI's API, (2) the ChatGPT Plus web interface, and (3) Microsoft Copilot. Each example specifies which method was used. Each application area includes Korinek's subjective rating of LLM capabilities as of September 2023, with minor updates reflecting improvements through January 2024:

⚫ = "Highly useful; incorporating this into your workflow will save you time."

◐ = "Useful; requires oversight but will likely save you time."

◯ = "Experimental; results are inconsistent and require significant human oversight." 
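For reference, responses labelled "via OpenAI API" below were generated with plain chat-completion calls at temperature=0. A minimal sketch of such a call is shown here; it assumes the current openai Python client (the original responses may have used an earlier client version), and the prompt text is only an illustration:

```python
# Sketch of how the API-generated responses on this page could be reproduced.
# Assumes the modern `openai` Python client; the prompt below is a placeholder.

def build_request(prompt: str, model: str = "gpt-4-0613") -> dict:
    """Assemble a chat-completion request with temperature=0, as used on this page."""
    return {
        "model": model,
        "temperature": 0,  # minimize randomness for reproducible responses
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request(
    "Can you brainstorm 10 ways in which developing countries can "
    "improve tax revenue mobilization? Limit your response to 25 words for each point."
)

# To actually send the request (requires an API key in the environment):
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**request)
#   print(response.choices[0].message.content)
```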


Ideation and feedback

Brainstorming  ⚫

LLMs excel at rapidly producing a high volume of ideas from appropriate prompts. This can be very useful to researchers, as even a few impactful suggestions can considerably improve the efficiency of research efforts. Prompts #1 and #2 below offer simple examples.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Can you brainstorm 10 ways in which developing countries can improve tax revenue mobilization? Limit your response to 25 words for each point.

RESPONSE:

1. Broadening the Tax Base: Expand the tax net to include informal sectors and untaxed areas, reducing dependency on a narrow tax base and increasing potential revenue.

2. Tax Education: Implement educational programs to inform citizens about tax responsibilities and benefits, fostering a culture of compliance and increasing voluntary tax payments.

3. Digitalization of Tax Collection: Adopt digital tax collection systems to reduce evasion, improve efficiency, and make it easier for taxpayers to comply with tax regulations.

4. Strengthening Legal Frameworks: Update tax laws to close loopholes, combat evasion, and ensure that tax policies are fair, transparent, and enforceable.

5. Improving Tax Administration: Invest in training tax officials, enhancing their capacity to enforce laws, conduct audits, and reduce corruption within the tax system.

6. Addressing Informal Economy: Implement strategies to formalize informal businesses, bringing them into the tax system and expanding the potential tax revenue.

7. International Cooperation: Collaborate with other countries to combat tax evasion, especially in dealing with offshore tax havens and illicit financial flows.

8. Progressive Taxation: Introduce or enhance progressive tax systems where higher income earners pay a larger percentage, ensuring a fair distribution of the tax burden.

9. Tax Incentives Rationalization: Evaluate and limit tax incentives to ensure they are achieving desired economic outcomes without excessively eroding the tax base.

10. Public Expenditure Transparency: Increase transparency in public spending to build trust in the government, encouraging taxpayers to contribute their fair share.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

I am an economist working on VAT in Rwanda. Can you brainstorm an outline for my research on VAT compliance among informal small businesses in Kigali?

RESPONSE:

Certainly, I can help you brainstorm an outline for your research on VAT compliance among informal small businesses in Kigali, Rwanda. Here's a suggested structure for your research:

1. Introduction

2. Literature Review

3. Methodology

4. Findings and Discussion

5. Challenges and Barriers to VAT Compliance

6. Policy Recommendations

7. Conclusion

8. References

9. Appendices

This outline provides a comprehensive structure for your research. It's important to ensure that your study adheres to ethical standards, especially when collecting data from small business owners. Also, consider the impact of recent global economic trends, as they might affect VAT compliance in the informal sector.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Research strategy (not in Korinek, 2023)

LLMs can assist economists in crafting research proposals or strategies by suggesting qualitative or quantitative methods tailored to their datasets. Given information about the data and the research goals, the models can recommend methodologies, suggest potential analyses, and even identify gaps in the existing approach. Prompt #1 shows a simple example.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I’d like to estimate the causal effect of a national school feeding programme in Viet Nam, implemented in 36 provinces in 2018, on school enrollment. I have data on school enrollment rates in all of the 96 provinces from 2014 to 2021, along with information on household income and parental education. Which research strategy would you recommend?

RESPONSE:

Given your dataset and the nature of your research question, a Difference-in-Differences (DiD) approach could be a suitable research strategy.

The DiD approach compares the change in school enrollment rates in the provinces where the program was implemented (treatment group) to the change in school enrollment rates in the provinces where the program was not implemented (control group). This method helps to control for time-invariant unobserved characteristics that could affect school enrollment rates.

Here’s a simplified version of how you could set up your DiD model:

1. Define your treatment and control groups: The treatment group consists of the 36 provinces where the program was implemented, and the control group consists of the remaining 60 provinces.

2. Define your pre-treatment and post-treatment periods: The pre-treatment period could be the years from 2014 to 2017, and the post-treatment period could be the years from 2018 to 2021.

3. Estimate the DiD model: You can use a regression model of the following form:

Enrollment_{it} = α + β_1 Post_t + β_2 Treatment_i + β_3 (Post_t × Treatment_i) + X'_{it} γ + ε_{it}

where Post_t is an indicator for the post-treatment period, Treatment_i is an indicator for the treated provinces, and X_{it} is a vector of controls (e.g. household income and parental education).

The coefficient of interest is β_3, which captures the differential effect of the treatment on the treated provinces relative to the control provinces.

Remember to check the common trend assumption in DiD models, which assumes that in the absence of treatment, the average outcomes for the treated and control groups would have followed the same trend over time.

This is a high-level overview and the actual implementation might require additional steps and considerations based on the specifics of your data and research question. It’s always a good idea to consult with a statistician or econometrician. Good luck with your research!

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)
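As an aside, the DiD setup suggested in the response can be sketched numerically. The following is a minimal illustration with simulated data (the data-generating process, effect size, and variable names are all invented for the example), estimating the interaction coefficient by ordinary least squares using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated province-year panel: 96 provinces, years 2014-2021,
# with the programme reaching 36 provinces from 2018 onward.
# All numbers here are invented for illustration.
n_prov, years = 96, np.arange(2014, 2022)
treated = np.repeat(np.arange(n_prov) < 36, len(years)).astype(float)
post = np.tile((years >= 2018).astype(float), n_prov)

true_beta3 = 2.5  # the "true" programme effect on enrollment (made up)
enrollment = (
    70.0                          # baseline enrollment rate
    + 1.0 * post                  # common time trend
    + 3.0 * treated               # level difference of treated provinces
    + true_beta3 * post * treated # treatment effect
    + rng.normal(0, 1, n_prov * len(years))
)

# Design matrix: intercept, Post, Treatment, Post x Treatment
X = np.column_stack([np.ones_like(post), post, treated, post * treated])
beta, *_ = np.linalg.lstsq(X, enrollment, rcond=None)

print(f"estimated DiD effect (beta_3): {beta[3]:.2f}")  # close to 2.5
```

Because the simulation builds in the common trend assumption, the estimated β_3 recovers the programmed effect up to sampling noise; with real data that assumption has to be checked rather than assumed.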


Feedback ◐

LLMs are useful in research settings for their ability to alternate between generating a variety of ideas and evaluating them, akin to how a researcher might first ideate and then refine particular concepts. This allows for a dynamic process of brainstorming, selection, and further enhancement of the most promising ideas. Simple use cases are shown in prompts #1 and #2 below. Building on the information generated during brainstorming, the latter prompt also makes use of internet searches via Microsoft Copilot to provide viewpoints and ideas for a literature review. See also 'Literature search' and 'Drafting academic text' below.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

What are the top 3 pros and cons of working on research around taxing the digital economy in sub-Saharan Africa?

RESPONSE:

Research on taxing the digital economy in sub-Saharan Africa is a complex and evolving field. Here are the top three pros and cons of working on such research:

Pros:

1. Policy Relevance and Impact:

2. Addressing Tax Evasion and Base Erosion:

3. Innovation and Capacity Building:

Cons:

1. Data Availability and Quality:

2. Rapidly Changing Environment:

3. Complexity and Multidisciplinarity:

Overall, while research on taxing the digital economy in sub-Saharan Africa presents significant opportunities for positive change and innovation, it also faces substantial challenges that must be carefully managed to ensure the research is impactful and relevant.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

I asked you earlier to create an outline for my research on VAT compliance among informal small businesses in Kigali. You suggested a literature review that covers literature on VAT compliance in Rwanda; insights into the characteristics of informal small businesses; and common challenges faced in VAT compliance.

Provide some ideas on key viewpoints and perspectives I should consider for each topic.

RESPONSE:

Sure, here are some key viewpoints and perspectives you should consider for each topic:

1. VAT Compliance in Rwanda

2. Characteristics of Informal Small Businesses

3. Common Challenges Faced in VAT Compliance

Remember to always cite your sources when presenting these viewpoints and perspectives in your research. Good luck with your study!

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

Feedback on a complete paper ◐

In November 2023, OpenAI rolled out its "GPT-4 (All Tools)" model, which automatically selects the most appropriate tools for a given task. This update came with an increase in the context window, which now allows for the analysis of up to 32,000 tokens, or about 25,000 words (slightly behind Claude 2, with a context window of 100,000 tokens).

This allows users to upload long documents, which comes in handy in evaluating draft research papers, as shown in prompts #1 and #2. While GPT-4 and Claude 2 can provide reasonable feedback on qualitative papers, existing LLMs struggle with evaluating more quantitative papers in an insightful manner, often leading to relatively general feedback.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

What are the main strengths and limitations of this paper? [Attachment: wp2023-73-evaluating-impact-personal-income-tax-reform-Rwanda.pdf] Be very specific, avoid general suggestions, and always mention which section you are referring to in your comments.

RESPONSE:

The paper "Evaluating the Impact of the 2023–2024 Personal Income Tax Reform in Rwanda" provides a comprehensive analysis of Rwanda's recent tax reforms using tax-benefit microsimulation modelling. Key strengths and limitations, with specific references to sections, are as follows:

Strengths:

1. Detailed Analysis (Sections 2 and 4): The paper offers an in-depth examination of Rwanda's personal income tax system and the implications of the 2023-24 reforms. The methodology and results are thoroughly detailed, enhancing the paper's credibility.

2. Comparative Approach (Section 2.4): The comparison of Rwanda's tax system with other countries provides valuable context and a broader understanding of the tax reforms' impacts.

3. Methodological Rigor (Section 3): The use of the tax-benefit microsimulation model (RWAMOD) for analysis adds methodological rigor, allowing for a detailed exploration of the reforms' impacts on various income groups.

Limitations:

1. Data Limitations (Section 3.4): The paper acknowledges its reliance on static, non-behavioral models, which might not fully capture dynamic economic behaviors like labor supply changes or tax evasion.

2. Scope of Impact Analysis (Sections 5 and 6): While the paper effectively analyzes the impact of personal income tax reform, it acknowledges that this is just a part of broader fiscal reforms. Therefore, the conclusions are somewhat limited in scope.

3. Potential for Bias in Data (Section 3.4): There's an indication that the EICV dataset used might overrepresent high-income taxpayers, which could potentially bias the results.

Overall, the paper provides a detailed and methodologically sound analysis of Rwanda's tax reforms, although with limitations primarily related to data and the scope of the analysis.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. an uploaded PDF document)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

How could I improve sections 1 (Introduction) and 6 (Concluding remarks)? Provide 3-5 bullet points for each section, and be specific. The goal is to publish this work in a leading economics journal.

RESPONSE:

To improve Sections 1 (Introduction) and 6 (Concluding Remarks) of your paper for publication in a leading economics journal, consider the following suggestions:

Section 1: Introduction

1. Contextualize Rwanda's Economic Scenario: Provide more background on Rwanda's economic condition and how it relates to the tax reform. This will help in setting the stage for the relevance of your study.

2. Link to Broader Economic Literature: Draw connections between your study and the broader economic literature on tax reforms in developing countries. Citing key works will enrich the introduction.

3. Clear Research Questions: State your research questions more explicitly. This guides the reader on what to expect in the paper.

4. Methodological Novelty: Briefly highlight the methodological approach you're using, especially if it offers a new perspective or solution to existing issues in tax reform analysis.

5. Impact of Findings: Discuss the potential impact of your findings on policy-making and economic theory, setting up the significance of your work.

Section 6: Concluding Remarks

1. Summarize Key Findings: Concisely restate your main findings, emphasizing their significance in the context of Rwanda's tax policy and broader economic impacts.

2. Policy Implications: Discuss the practical implications of your findings for policymakers in Rwanda and comparable economies.

3. Limitations and Future Research: Acknowledge the limitations of your study and suggest areas for future research. This shows the scope for further development and understanding of the topic.

4. Connect to Introduction: Tie back to the objectives and questions raised in the introduction to create a cohesive narrative.

5. Broader Economic Impact: Reflect on how your findings contribute to the larger body of economic literature and what they imply for the study of tax reforms in developing countries.

These suggestions aim to enhance the clarity, coherence, and relevance of your paper, making it more appealing to a scholarly audience in leading economics journals.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. an uploaded PDF document)


Providing counterarguments ◐

LLMs can be helpful for challenging research ideas by presenting counterarguments (as shown in prompts #1 and #2). LLMs provide critiques without personal biases or concerns about ego, aiding in overcoming confirmation bias. These factors make LLMs valuable in presenting comprehensive and relatively unbiased viewpoints and in critically assessing research ideas.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I am working on an economic model that predicts that eliminating VAT exemptions will increase inequality in a developing country. What are the top 5 counterarguments? One line each.

RESPONSE:

1. Eliminating VAT exemptions can increase government revenue, which can be used for social programs to reduce inequality.

2. VAT exemptions often benefit wealthier individuals more than the poor, so removing them could reduce inequality.

3. The removal of VAT exemptions can encourage fair competition among businesses, promoting economic growth and reducing inequality.

4. VAT exemptions can distort market prices and removing them can lead to more efficient resource allocation, potentially reducing inequality.

5. The burden of VAT can be mitigated for the poor through targeted cash transfers or subsidies, making the removal of exemptions less likely to increase inequality.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

I am working on research to better understand whether UBI is a feasible and effective strategy for reducing poverty in low-income nations. Provide 10 opposing views, with one line each.

RESPONSE:

1. Cost Concerns: UBI may be financially unsustainable for low-income nations with limited fiscal space and high poverty rates.

2. Inflation Risk: Injecting cash into the economy without a corresponding increase in goods and services could lead to inflation, negating the benefits of UBI.

3. Work Disincentive: UBI could reduce the incentive to work, potentially leading to a decrease in labor force participation and productivity.

4. Implementation Challenges: The lack of robust administrative systems in low-income countries could hinder the effective targeting and distribution of UBI funds.

5. Dependency Culture: Regular, unconditional payments might foster a dependency on government support, rather than encouraging self-sufficiency and economic development.

6. Political Exploitation: UBI programs could be manipulated by political leaders, using them as a tool for garnering votes rather than genuinely addressing poverty.

7. Unequal Impact: A flat UBI payment may not adequately address the varying levels of need among the poor, potentially widening inequality within low-income populations.

8. Opportunity Cost: The funds used for UBI could be invested in other areas like education, healthcare, or infrastructure, which may have a more significant long-term impact on poverty reduction.

9. Informal Economy: In countries with large informal sectors, UBI might not reach those who need it most or fail to integrate them into the formal economy.

10. Cultural and Social Factors: UBI may not align with the cultural and social norms of certain societies, where communal support systems or traditional forms of assistance are preferred.

MODEL:

gpt-4-1106-preview via OpenAI API (temperature=0)

Writing and editing

Synthesizing text  ⚫

One key function of LLMs is writing assistance, which includes transforming basic bullet points into cohesive, clearly articulated paragraphs. LLMs excel at reorganizing fragmented ideas into logical, well-structured sentences. It is also possible to integrate examples and potential solutions into the narrative, as shown in prompt #1. While the text generated is usually of high quality, it may sometimes lack the idiosyncrasies of human writing. That said, LLMs are adaptable and can write in various styles and formats, including LaTeX-formatted text, as in prompt #2.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Write a paragraph in the style of an academic economist that integrates the following points. Include a topic sentence. 

* progressive taxation plays a big role in mitigating income inequality in developing countries

* the effectiveness of such systems is often hampered by issues like low compliance rates and a large informal sector [discuss additional challenges]

* economists must explore strategies like broadening the tax base, and leveraging technology for better tax enforcement [offer additional solutions]

* it is also important to consider international cooperation to combat tax evasion [also offer specific ways in which this can be done]

* conclude by stating a balance must be struck between ensuring tax fairness and avoiding overly burdensome taxation [discussing why]

RESPONSE:

Progressive taxation is a critical tool for addressing income inequality in developing countries, serving as a mechanism to redistribute wealth and promote economic fairness. However, the effectiveness of such systems is often undermined by challenges such as low compliance rates, a large informal sector, and inadequate tax administration. These issues can lead to a significant tax gap, where the amount of tax owed is greater than the amount that is paid voluntarily and on time. Therefore, economists must explore innovative strategies to enhance the efficacy of progressive taxation. These strategies may include broadening the tax base to include more taxpayers, leveraging technology for better tax enforcement, and implementing stricter penalties for non-compliance. International cooperation is also crucial in combating tax evasion, which could be achieved through information sharing agreements, harmonizing tax policies, and strengthening international tax norms. However, in implementing these measures, a balance must be struck between ensuring tax fairness and avoiding overly burdensome taxation. Overly aggressive tax policies could discourage economic activity and investment, thereby undermining the very growth needed to generate tax revenues. Therefore, while progressive taxation is a powerful tool for income redistribution, its implementation must be carefully calibrated to ensure it promotes economic growth and fairness.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Synthesize these points about my research goals into a cohesive paragraph for my paper, including the difference-in-differences regression equation in a LaTeX format: 

*Investigating how microfinance affects small business growth differently in developing and developed areas. 

*Analysing success factors of microfinance, such as loan repayment rates and the economic climate.

*Equation: Change in Y equals alpha plus beta1 times (Microfinance Access times Post-Int Period) plus beta2 times Controls plus epsilon, where Change in Y represents business growth change, and the Post-Int Period is the time after microfinance implementation.

*Suggesting policy improvements for more effective microfinance programs, taking into account these regional differences.

RESPONSE:

This research aims to explore the differential impact of microfinance on small business growth across developing and developed regions. By employing a difference-in-differences regression approach, we seek to understand how microfinance contributes to the success of small enterprises, considering factors such as loan repayment rates and the prevailing economic conditions. The core of our analysis is encapsulated by the following regression equation:

\[

\Delta Y = \alpha + \beta_1 (\text{Access} \times \text{Post-Int Period}) + \beta_2 (\text{Controls}) + \varepsilon

\]


In this equation, $\Delta Y$ represents the change in business growth, while the interaction term between Microfinance Access and the Post-Intervention Period captures the effect of microfinance implementation over time. Control Variables are included to account for other factors that may influence business growth. Through this analysis, we aim to identify the key drivers of microfinance success and propose policy recommendations that could enhance the effectiveness of microfinance programs, with a particular focus on addressing the unique challenges and opportunities in different regional contexts.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Editing text  ⚫

LLMs are beneficial for editing research, as they can correct grammatical and spelling mistakes, and enhance style, clarity, or simplicity in writing (prompt #1). This feature is especially helpful for non-native English speakers aiming to improve and refine their writing. LLMs can also explain their edits, helping writers learn from corrections (prompt #2). This feature has seen significant improvement from GPT-3.5 to GPT-4. Finally, LLMs can adapt text to different comprehension levels, making it accessible to a broader range of readers. 

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Can you correct the following sentence?

These paper describe 5 way in which develloping countries can imprve tax revenue mobilizatio

RESPONSE:

This paper describes 5 ways in which developing countries can improve tax revenue mobilization.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Can you explain the main corrections you made?

RESPONSE:

Sure, here are the corrections I made:

1. "These paper" was changed to "This paper" because 'paper' is singular.

2. "describe" was changed to "describes" to agree with the singular subject 'paper'.

3. "way" was changed to "ways" because there are multiple ways being described.

4. "develloping" was corrected to "developing" to fix the spelling error.

5. "imprve" was corrected to "improve" to fix the spelling error.

6. "mobilizatio" was corrected to "mobilization" to fix the spelling error.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)


Evaluating text  ⚫

LLMs are also adept at evaluating text for style or clarity and identifying areas for improvement. They can effectively rewrite text to address these shortcomings. Additionally, LLMs can be prompted to analyze text for specific aspects, such as identifying the most challenging arguments to follow, evaluating the overall clarity and coherence, and pinpointing any ambiguities present. Prompts #1 and #2 below illustrate these capabilities.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Give a concise list of the top 5 stylistic shortcomings of the following abstract: 

In the wake of the COVID-19 pandemic, several countries enacted tax and social protection measures to help mitigate the economic hardship faced by individuals and households. This experience underscores the need to better understand the impact of such programmes on incomes and poverty during crises, especially in low- and middle-income countries (LMICs) where they are most needed. This paper reviews existing empirical literature on the subject, conducting a scoping review on quantitative studies published between 2000 and 2022. Following a structured selection approach, we identify 39 studies about the role of social protection and taxation in LMICs during periods of crisis. The results reveal that LMICs often enact both vertical and horizontal expansions of existing unconditional cash transfer schemes during crises, although the applications vary by geography. Our research also suggests a sharp increase in recent studies because of the COVID-19 pandemic, and a large variaety of methods used to study the subject. In terms of effects, social protection can help cushion households against crises, but the effect size depends on the policy adopted, country context, and type of crisis. The minimal empirical evidence about tax policy studies do not enable us to draw conclusions. 

[This is an early abstract from this WIDER Working Paper, titled "The role of social protection and tax policies in cushioning crisis impacts on income and poverty in low- and middle-income countries: A rapid scoping review".]

RESPONSE:

1. Redundancy: The abstract contains redundant phrases such as "during crises" and "during periods of crisis," which could be streamlined for conciseness.

2. Vagueness: The term "variaety" is a misspelling of "variety," and the abstract does not specify what is meant by "vertical and horizontal expansions," which may confuse readers unfamiliar with the terminology.

3. Lack of specificity: The abstract mentions "the role of social protection and taxation in LMICs during periods of crisis" but does not specify which crises, other than COVID-19, are being considered.

4. Passive voice: The phrase "a scoping review on quantitative studies published between 2000 and 2022" uses passive voice, which can make the writing less engaging. Active voice could make the sentence more direct.

5. Incomplete information: The abstract states that "the minimal empirical evidence about tax policy studies do not enable us to draw conclusions," which is grammatically incorrect (should be "does not enable") and leaves the reader wondering about the nature of the tax policy studies and why they are insufficient.

MODEL:

gpt-4-1106-preview via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

I asked you to provide a concise list of the top 5 stylistic shortcomings of the following abstract: [as above]. This is your response: [as above]. Can you rewrite the abstract to correct these shortcomings?

RESPONSE:

Certainly! Here's a revised version of the abstract that addresses the identified stylistic shortcomings:

"In response to the economic challenges of the COVID-19 pandemic, various countries implemented tax and social protection measures to support individuals and households. Understanding the impact of these programs on income and poverty is crucial, particularly in low- and middle-income countries (LMICs), where they are most needed. This paper presents a scoping review of quantitative studies published from 2000 to 2022 on this topic. We employed a structured selection method to analyze 39 studies focused on social protection and taxation in LMICs during crises, including but not limited to COVID-19. Our findings indicate that LMICs commonly expand unconditional cash transfer schemes in response to crises, with the nature of these expansions differing by region. The surge in research since the COVID-19 outbreak has employed a diverse range of methods. We found that social protection measures can buffer households against crises, with effectiveness varying based on policy design, country context, and crisis type. However, limited empirical data on tax policy prevent conclusive insights."

This rewrite aims to make the abstract more concise, clear, specific, and engaging, while correcting grammatical errors.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Generating titles and tweets  ⚫

LLMs are effective at generating creative and attention-grabbing headlines or titles for papers, as demonstrated through an example related to the abstract discussed under 'Evaluating text'. When tasked with this, the LLM produces titles that are both relevant and stylistically appropriate for the subject (prompt #1). The titles produced in a subsequent request (prompt #2), however, lack adequate descriptiveness.

Additionally, abstracts can be provided to LLMs to promote a paper, for instance by writing a catchy tweet or producing other promotional materials (prompt #3).

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Brainstorm two or three catchy titles for a paper with the same abstract. [See prompt #1 under 'Evaluating text'].

RESPONSE:

1. "Navigating Crises: A Review of Social Protection and Tax Measures in Low- and Middle-Income Countries"

2. "Economic Resilience Amidst Pandemics: The Role of Social Protection and Taxation in LMICs"

3. "Unraveling the Impact of Social Protection and Tax Policies in LMICs During Crises: A Scoping Review"

[Note that the original title of this WIDER WP, with a slightly revised abstract, was "The role of social protection and tax policies in cushioning crisis impacts on income and poverty in low- and middle-income countries: A rapid scoping review".]

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)
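For readers who want to reproduce the API-based examples on this page, here is a minimal sketch of how such a request can be assembled. The helper function and its name are ours, not part of the OpenAI SDK; sending the request requires the openai package and an API key, so only the payload is built here.

```python
def build_chat_request(prompt: str, model: str = "gpt-4-0613") -> dict:
    """Assemble a chat-completion payload; temperature=0 makes the output
    as close to deterministic as the API allows, aiding reproducibility."""
    return {
        "model": model,
        "temperature": 0,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Summarize the following text in one sentence: ...")
```

With v1 of the openai Python SDK, this payload maps onto the chat-completions endpoint, e.g. `client.chat.completions.create(**request)`.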

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Make them a bit more sober and keep them short

RESPONSE:

1. "Social Protection in Crisis: A Review"

2. "Tax Measures and Crises in LMICs"

3. "LMICs: Social Protection and Taxation During Crises"

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #3:

Write a tweet chain of four tweets that summarize the following text. Use simple sentences and make the tweets catchy. Also number them, using "(1/4)", "(2/4)", etc. at the end of each tweet in the chain: [See prompt #1 under 'Evaluating text'.]

RESPONSE:

This is Bing. I can try to write a tweet chain that summarizes the text. Here is what I came up with:

How do tax and social protection policies affect incomes and poverty in low- and middle-income countries during crises? A new paper reviews the evidence. (1/4)

The paper covers 39 studies on the topic, published between 2000 and 2022. It finds that many LMICs expand their cash transfer programmes during crises, but the effects vary by region and type of crisis. (2/4)

The paper also shows a surge in research on this topic due to the COVID-19 pandemic, using a range of methods. However, there is little empirical data on the impact of tax policies during crises in LMICs. (3/4)

The paper highlights the need for more rigorous and comprehensive studies on the role of tax and social protection in mitigating the economic hardship caused by crises in LMICs. Read the full paper here: [link] (4/4)

MODEL:

GPT-4 Turbo via Microsoft Copilot (Creative mode)

Background research

Summarizing text  ⚫

LLMs like GPT-4 excel in summarizing text, efficiently distilling key points from a given passage. This proficiency is evident in prompt #1, where the model succinctly encapsulates the essential elements of a WIDER Working Paper based on its abstract. As detailed in the section 'Feedback on a complete paper', GPT-4's capability currently extends to processing around 25,000 words. This enables it to also effectively handle longer texts, as demonstrated in prompt #2. GPT-4 can retain and analyze the full text of a typical paper, enabling detailed responses and follow-ups about the content. Additionally, it can discuss specific evidence supporting key points, summarize main conclusions, explore the most innovative ideas, and explain findings from specific sections of the paper.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Summarize the following text in one sentence: 

In the wake of the COVID-19 pandemic, several countries enacted tax and social protection measures to help mitigate the economic hardship faced by individuals and households. This experience underscores the need to better understand the impact of such programmes on incomes and poverty during crises, especially in low- and middle-income countries (LMICs) where they are most needed. This paper reviews existing empirical literature on the subject, conducting a scoping review on quantitative studies published between 2000 and 2022. Following a structured selection approach, we identify 39 studies about the role of social protection and taxation in LMICs during periods of crisis. The results reveal that LMICs often enact both vertical and horizontal expansions of existing unconditional cash transfer schemes during crises, although the applications vary by geography. Our research also suggests a sharp increase in recent studies because of the COVID-19 pandemic, and a large variaety of methods used to study the subject. In terms of effects, social protection can help cushion households against crises, but the effect size depends on the policy adopted, country context, and type of crisis. The minimal empirical evidence about tax policy studies do not enable us to draw conclusions. 

[This is an earlier abstract from this WIDER Working Paper, titled "The role of social protection and tax policies in cushioning crisis impacts on income and poverty in low- and middle-income countries: A rapid scoping review".]

RESPONSE:

The paper reviews 39 studies on the role of social protection and taxation in low- and middle-income countries (LMICs) during crises, revealing that these countries often expand existing cash transfer schemes, but the effectiveness varies based on the policy, country, and type of crisis, while there is insufficient empirical evidence to draw conclusions about tax policies.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Provide a short summary of this paper [Attachment: wp2023-6-role-social-protection-tax-policies-crisis.pdf]

RESPONSE:

The paper titled "The Role of Social Protection and Tax Policies in Cushioning Crisis Impacts on Income and Poverty in Low- and Middle-Income Countries" is a comprehensive review of empirical literature on the impact of social protection and taxation during crises, particularly in low- and middle-income countries (LMICs). It analyzes studies from 2000 to 2022, focusing on quantitative research. The paper reveals that LMICs often expand unconditional cash transfer schemes during crises, but effects vary based on policy, context, and crisis type. There's limited empirical evidence on tax policies' impact. The study contributes to understanding the role of these policies in crisis response, highlighting a sharp increase in related studies due to the COVID-19 pandemic.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. an uploaded paper) 

Literature search ◐ (◯ in Korinek, 2023)

Language models have limitations in searching and referencing academic literature. They sometimes create fictional references that seem credible but do not actually exist. While GPT-4 has improved over its predecessor GPT-3.5 (which is free) in providing more accurate references, it is still crucial to verify any references generated by these models for accuracy. In prompt #1, GPT-3.5 fails to find the correct reference. In prompt #2, GPT-4 with online access does pinpoint the correct reference, while also identifying another one that matches the query, both with links. 

Prompt #3 uses the Consensus GPT via ChatGPT to identify papers that address a specific research question. Consensus, a GPT available via ChatGPT Plus, allows for searching and synthesizing results from 200 million academic papers. Several similar tools, including standalone applications, are available as of early 2024. See also 'Feedback' and 'Drafting academic text'. 

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Provide citation information for Gibson Masumbu's book chapter with Kristi Mahrt from 2016.

RESPONSE:

I'm sorry, but I don't have access to specific information about Gibson Masumbu's book chapter as my knowledge is limited to what's available up until January 2022, and I don't have internet access to search for current or specific publications. If you have any other questions or need information on a different topic, feel free to ask, and I'll do my best to assist you.

MODEL:

GPT-3.5 via the ChatGPT Plus web interface (free, no online search)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Provide citation information for Gibson Masumbu's book chapter with Kristi Mahrt from 2016, finding the info from online if needed.

RESPONSE:

Based on a quick search, here's what I discovered.

Gibson Masumbu and Kristi Mahrt co-authored a book chapter titled "Assessing Progress in Welfare Improvements in Zambia: A Multidimensional Approach" in 2016. This chapter is part of the book "Growth and Poverty in Sub-Saharan Africa," published by Oxford University Press. The publication details can be found on the UNU-WIDER website and EconBiz. Additionally, they also contributed to another chapter titled "Estimating Multidimensional Poverty in Zambia" in the book "Measuring Poverty and Wellbeing in Developing Countries," published in 2017 by Oxford University Press. This information is also available on the UNU-WIDER website.

For more details, you can visit the UNU-WIDER page for "Assessing Progress in Welfare Improvements in Zambia" here and "Estimating Multidimensional Poverty in Zambia" here, as well as the EconBiz entry here.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. online search) 

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #3:

Find 5 papers from top economics journals that analyse the effects of conditional cash transfer programs on poverty or inequality.

RESPONSE:

Here are five papers from top economics journals analyzing the effects of conditional cash transfer programs on poverty or inequality:

1. "Inequality and conditionality in cash transfers: Demographic transition and economic development" by Koji Kitaura & K. Miyazawa (2021) in Economic Modelling. This study examines how conditionality in cash transfers affects inequality and poverty, finding that conditional cash transfer programs may promote short-term escape from poverty traps but could worsen income inequality by increasing fertility rates in low-income groups (Kitaura & Miyazawa, 2021).

2. "The Brazilian cash transfer program (Bolsa Família): A tool for reducing inequalities and achieving social rights in Brazil" by José Anael Neves et al. (2020) in Global Public Health. This paper analyses Bolsa Família, the world's largest conditional cash transfer program, noting its role in reducing income inequalities and expanding access to health and education, though it does not significantly interrupt the intergenerational cycle of poverty (Neves et al., 2020).

3. "Cash transfers, social protection and poverty reduction" by R. Slater (2011) in International Journal of Social Welfare. This article evaluates different types of cash transfers in various contexts, focusing on their appropriateness and cost-effectiveness in reducing poverty and inequality (Slater, 2011).

4. "Cash transfer programmes, income inequality and regional disparities. The case of the Uruguayan Asignaciones Familiares" by Verónica Amarante et al. (2011) in Cambridge Journal of Regions, Economy and Society. The study microsimulates the effects of Uruguay's Asignaciones Familiares program, showing that while it positively impacts child education and poverty reduction, it does not significantly affect regional disparities and income inequality (Amarante et al., 2011).

5. "Brazil's conditional cash transfer program associated with declines in infant mortality rates" by A. Shei (2013) in Health affairs. This study finds that Brazil's Bolsa Família program was associated with a significant reduction in infant mortality rates, suggesting these programs can improve child health and reduce health inequalities (Shei, 2013).

These papers provide a comprehensive overview of the varied impacts of conditional cash transfer programs on poverty and inequality, with a focus on different geographical contexts and program specifics.

MODEL:

Consensus GPT via the ChatGPT Plus web interface (incl. search and links to Consensus)

Formatting references  ⚫

LLMs, even basic ones like GPT-3.5, are adept at formatting citations once they have the necessary details. They can efficiently convert entire bibliographies into a different style, e.g. into BibTeX format (prompt #1) or APA style (prompt #2). For widely cited works, a simple prompt like "bibtex reference for mundell 1961" can generate accurate references. However, it is always advisable to verify the generated references, particularly for less well-known papers.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I'm looking for the first study you mention [prompt #2 under 'Literature search'], the book chapter titled "Assessing Progress in Welfare Improvements in Zambia". Convert this reference I found to bibtex format:

Gibson Masumbu and Kristi Mahrt: Assessing Progress in Welfare Improvements in Zambia: A Multidimensional Approach. In book: Growth and Poverty in Sub-Saharan Africa. Editors: Channing Arndt, Andy McKay, Finn Tarp. April 2016, Oxford University Press, Number of Pages 528

RESPONSE:

To convert the provided reference into BibTeX format, you would format it as follows:

@InBook{Masumbu2016,

  author    = {Gibson Masumbu and Kristi Mahrt},

  title     = {Assessing Progress in Welfare Improvements in Zambia: A Multidimensional Approach},

  chapter   = {},

  pages     = {},

  publisher = {Oxford University Press},

  year      = {2016},

  editor    = {Channing Arndt and Andy McKay and Finn Tarp},

  booktitle = {Growth and Poverty in Sub-Saharan Africa},

  month     = apr,

  note      = {Number of Pages 528}

}

Please replace {} in the chapter and pages fields with the specific chapter number and page range if that information is available.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. online search) 

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Format the following citations in APA style:

Lawlor, K., Handa, S., and Seidenfeld, D. (2019). ‘Cash Transfers Enable Households to Cope with Agricultural Production and Price Shocks: Evidence from Zambia’. Journal of Development Studies, 55(2): 209–26.

Gibson Masumbu and Kristi Mahrt: Assessing Progress in Welfare Improvements in Zambia: A Multidimensional Approach. In book: Growth and Poverty in Sub-Saharan Africa. Editors: Channing Arndt, Andy McKay, Finn Tarp. April 2016, Oxford University Press, Number of Pages 528

RESPONSE:

Here are the provided citations formatted in APA style:

1. Lawlor, K., Handa, S., & Seidenfeld, D. (2019). Cash transfers enable households to cope with agricultural production and price shocks: Evidence from Zambia. Journal of Development Studies, 55(2), 209-226.

2. Masumbu, G., & Mahrt, K. (2016). Assessing progress in welfare improvements in Zambia: A multidimensional approach. In C. Arndt, A. McKay, & F. Tarp (Eds.), Growth and poverty in Sub-Saharan Africa (pp. [pages of the chapter]). Oxford University Press.

In APA style, the authors' names are separated by commas and an ampersand (&) before the last author. The title of the article is in sentence case and not italicized, while the title of the journal or book is italicized. The volume number of the journal is italicized, but the issue number is not, and they are separated by a parenthesis. For book chapters, include the editors' initials and surnames, followed by "Eds." in parentheses, and the book title is italicized. The chapter's specific page range should be included if available.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Translating text  ⚫

Jiao and colleagues (2023) find that LLMs show performance comparable to commercial translation tools for common languages. During their pretraining phase, LLMs develop conceptual representations that transcend individual languages, aiding in effective translation between languages. This said, the efficiency drops for languages with limited digital resources and fewer available translations, highlighting a disparity in performance based on the availability of language data. Translation capabilities of GPT-4 are demonstrated by supplying the same prompt to the OpenAI API (prompt #1) and Microsoft Copilot (prompt #2).

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Translate the following sentence into Spanish, German, and Finnish: This book comprehensively evaluates trends in living conditions in 16 major sub-Saharan African countries, corresponding to nearly 75% of the total population.

RESPONSE:

Spanish: Este libro evalúa de manera integral las tendencias en las condiciones de vida en 16 países importantes del África subsahariana, correspondientes a casi el 75% de la población total.

German: Dieses Buch bewertet umfassend die Trends in den Lebensbedingungen in 16 wichtigen Ländern südlich der Sahara, die fast 75% der Gesamtbevölkerung entsprechen.

Finnish: Tämä kirja arvioi kattavasti elinolosuhteiden suuntauksia 16:ssa merkittävässä Saharan eteläpuolisessa Afrikan maassa, mikä vastaa lähes 75% koko väestöstä.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

[Same as prompt #1, but here using Microsoft Copilot.]

RESPONSE:

Sure, here are the translations:

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

Explaining concepts ◐

LLMs serve as effective tutors, clarifying various economic concepts and ideas for both learners and advanced researchers. Using the latest LLMs is essential for obtaining high-quality explanations. Follow-up questions enhance personalized learning. However, despite their generally accurate responses, LLMs can sometimes confidently provide incorrect information, even on basic topics. This poses a challenge as students might rely on them despite occasional inaccuracies. Earlier LLM versions are generally more prone to "hallucinations" or false answers. Two simple examples are shown in prompts #1 and #2. Prompt #3 asks a more nuanced (and policy-relevant) question about setting commodity taxes in a developing country. 

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

What is the second theorem of welfare economics?

RESPONSE:

The second theorem of welfare economics states that any Pareto efficient outcome can be achieved by some competitive equilibrium, given appropriate redistribution of resources. This means that the government can improve market efficiency through taxation and redistribution, without distorting prices or productivity. It essentially suggests that the free market, with the right amount of intervention, can lead to a socially desirable distribution of resources.

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

What are instrumental variables useful for? Explain and demonstrate with a very simple example, including formulas. Produce the whole answer in LaTeX format, which I can copy directly into a LaTeX editor.

RESPONSE:

[The LaTeX input generated is below in green, followed by LaTeX output.]

\documentclass{article}

\usepackage[utf8]{inputenc}

\title{Understanding Instrumental Variables}

\author{}

\date{}

\begin{document}

\maketitle

Instrumental variables (IV) are used in statistical analysis to estimate causal relationships when controlled experiments are not feasible and there is a risk of endogeneity. Endogeneity occurs when an explanatory variable is correlated with the error term in a regression model, leading to biased and inconsistent estimates. IVs help to address this issue by providing a source of variation that affects the dependent variable only through the independent variable of interest.

\section{Simple Example}

Suppose we want to estimate the effect of education on earnings. However, education might be endogenous (e.g., due to unobserved factors like ability or family background). An instrumental variable can help isolate the causal effect of education on earnings.

Let $Y$ be earnings, $X$ be years of education, and $Z$ be an instrumental variable. The basic idea is to use $Z$ to get a variation in $X$ that is not correlated with the error term in the earnings equation. A common choice for $Z$ could be proximity to a college.

The model can be specified in two stages:

\begin{enumerate}

    \item First Stage: $X = \alpha_0 + \alpha_1 Z + u$, where $u$ is the error term.

    \item Second Stage: $Y = \beta_0 + \beta_1 \hat{X} + \varepsilon$, where $\hat{X}$ is the predicted value of $X$ from the first stage, and $\varepsilon$ is the error term.

\end{enumerate}

The coefficient $\beta_1$ gives the estimated causal effect of education on earnings. For the IV estimation to be valid, two key conditions must be met:

\begin{itemize}

    \item Relevance: The instrumental variable $Z$ must be correlated with the endogenous explanatory variable $X$. 

    \item Exogeneity: The instrumental variable $Z$ must be independent of the error term $\varepsilon$ in the outcome equation.

\end{itemize}

\end{document}

OUTPUT FROM LATEX:

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. online search) 
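The two-stage procedure described in the response above can be checked numerically. The sketch below is an editor's illustration, not part of the model's output: the data are simulated, with an unobserved confounder that biases naive OLS and an instrument that recovers the true coefficient of 0.5.

```python
import numpy as np

# Simulated data: unobserved "ability" u raises both education X and earnings Y,
# so naive OLS of Y on X is biased upward. The instrument Z (think: proximity
# to a college) shifts X but is independent of u. True causal effect is 0.5.
rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                       # unobserved confounder
Z = rng.normal(size=n)                       # instrument
X = 1.0 + 0.8 * Z + u + rng.normal(size=n)   # years of education
Y = 2.0 + 0.5 * X + u + rng.normal(size=n)   # earnings

def ols_slope(y, x):
    """Slope from a bivariate OLS regression of y on x (with intercept)."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1]

naive = ols_slope(Y, X)  # biased upward by the confounder u

# First stage: regress X on Z and form fitted values X_hat
A = np.column_stack([np.ones_like(Z), Z])
(a0, a1), *_ = np.linalg.lstsq(A, X, rcond=None)
X_hat = a0 + a1 * Z

# Second stage: regress Y on X_hat; recovers the causal effect (approx. 0.5)
iv = ols_slope(Y, X_hat)

print(f"naive OLS: {naive:.3f}, 2SLS: {iv:.3f}")
```

With real data one would use a dedicated 2SLS routine (e.g. in the statsmodels or linearmodels packages) to obtain correct standard errors; the manual second stage above gets the point estimate right but not the standard errors.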

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #3:

How should commodity taxes be set in a developing country context as a part of the overall tax and transfer system? Discuss the normative aspects involved on the basis of public economics theory. Also comment on the relevant empirical information one would need to have to be able to make policy recommendations.

RESPONSE:

Setting commodity taxes in a developing country within the broader framework of its tax and transfer system requires a careful consideration of both normative (theoretical) and empirical (practical) aspects, guided by the principles of public economics.

Normative Aspects:

1. Efficiency: The primary goal is to minimize economic distortions. Commodity taxes should not significantly alter consumer and producer behavior. In theory, taxes should be levied on goods with inelastic demand to minimize distortions.

2. Equity: Taxes should be equitable. This involves considering both vertical equity (higher taxes for those with greater ability to pay) and horizontal equity (similar taxes for people in similar financial situations). In developing countries, this often translates to lower taxes on basic necessities and higher taxes on luxury goods.

3. Revenue Sufficiency: The tax system must generate enough revenue to fund essential public services. This is crucial for developing countries, where public goods and infrastructure need significant investment.

4. Administrative Simplicity and Cost-Effectiveness: The tax system should be easy to administer and understand. Developing countries often face challenges in tax collection due to limited administrative capacity.

5. Flexibility and Stability: The tax system should be adaptable to changing economic circumstances but also provide a stable source of revenue.

Empirical Information Needed:

1. Elasticity of Demand and Supply: Understanding how consumers and producers respond to price changes is crucial. This information helps in estimating the efficiency costs of taxes and in selecting which commodities to tax.

2. Income and Consumption Patterns: Data on how different income groups spend their income is vital. This helps in understanding the distributional impacts of various taxes.

3. Tax Incidence: Who ultimately bears the tax burden (consumers, producers, or others)? Empirical studies can help in understanding the true incidence of commodity taxes.

4. Revenue Data: Historical data on revenue from different taxes helps in estimating the potential yield of new or reformed taxes.

5. Administrative Capacity: Assessing the country's ability to enforce and collect taxes is critical. This includes understanding the infrastructure, technology, and human resources available for tax administration.

6. Compliance and Evasion Rates: Understanding the existing levels of tax compliance and evasion can help in designing more effective and enforceable tax policies.

7. Economic and Social Objectives: Empirical data on the country's economic and social goals (e.g., poverty reduction, public health) is necessary to align the tax policy with these objectives.

8. Market Structure and Competition: The presence of monopolies or oligopolies can affect the efficiency and incidence of commodity taxes.

In summary, setting commodity taxes in a developing country context requires balancing theoretical principles with practical considerations. It's a complex task that requires detailed empirical data to understand the economic environment and the impacts of different tax policies. Public economics theory provides a framework, but empirical data is crucial to tailor policies to the specific context of the country.

MODEL:

GPT-4 via the ChatGPT Plus web interface
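One way to make the efficiency point in the response above precise is Ramsey's classic inverse-elasticity rule. In a simplified statement (the notation is ours; it assumes independent demands and sets distributional concerns aside), the optimal ad valorem tax on good i satisfies:

```latex
\frac{t_i}{1+t_i} = \frac{k}{\varepsilon_i}
```

where $\varepsilon_i$ is the own-price elasticity of demand for good $i$ and $k$ is a constant tied to the government's revenue requirement, so less elastic goods bear proportionally higher taxes. As the response notes, equity pulls in the opposite direction in developing countries, since necessities tend to be demand-inelastic.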

Drafting academic text (not in Korinek, 2023)

LLMs can write initial drafts of literature reviews or other parts of academic work. This said, there are considerable limitations in searching and referencing academic literature, especially with older models and models that cannot access the internet. As noted under 'Literature search', LLMs sometimes create fictional references that seem credible but do not actually exist. 

Newer models with online access, such as Microsoft Copilot with GPT-4 Turbo (prompt #1) and the Consensus GPT (prompts #2a and #2b), do, however, mostly provide correct references. Consensus, a GPT available via ChatGPT Plus, allows for searching and synthesizing results from 200 million academic papers. Several similar tools, including standalone applications, are available as of early 2024.

Correct references, integrated into text generated by LLMs, can be helpful for writing early drafts of literature reviews. This said, the references may not always be comprehensive or relevant, and the interpretations can at times be shallow; the outputs should never be used in research as such. It is also important to verify the accuracy of any references generated by LLMs.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I asked you earlier to create an outline for my research on VAT compliance among informal small businesses in Kigali. You suggested a literature review that covers literature on VAT compliance in Rwanda; insights into the characteristics of informal small businesses; and common challenges faced in VAT compliance.

Write the first draft of this literature review based on your existing knowledge and information you can find online.

RESPONSE:

Based on a quick search, here's what I found.

In drafting a literature review on VAT compliance among informal small businesses in Kigali, several key themes emerge from the existing body of research.

Impact of Electronic Billing Machines (EBMs): The introduction of Electronic Billing Machines (EBMs) in Rwanda has been a significant step towards enhancing VAT compliance. These machines facilitate the tracking of business transactions remotely and improve the taxpayers' experience in filing tax returns. The adoption of a more advanced version of EBMs, known as EBM2, by the Rwanda Revenue Authority led to noticeable increases in reported business turnover, taxable sales, VAT inputs, and VAT due. This suggests that technological advancements in tax collection systems play a crucial role in improving compliance rates​ (https://www.ictd.ac/publication/technology-evolution-tax-compliance-rwanda/).

Economic Factors Influencing VAT Compliance: Various economic factors impact VAT compliance in Rwanda. Studies have shown that higher marginal tax rates might lead to lower compliance, as evasion tends to increase with increasing tax rates. Additionally, the probability of audit, though perceived to have a high impact on compliance, does not always correlate directly with compliance behavior. This implies that factors other than mere economic calculations, such as tax morale and perceived fairness of the tax system, significantly influence compliance (https://www.researchgate.net/publication/292615834_Factors_Affecting_Tax_Compliance_in_Rwanda_An_Empirical_Analysis).

Challenges in Tax Compliance: Despite the implementation of systems like the Self Assessment System in Rwanda, the overall tax compliance, including VAT, remains low. This is partly due to the complexity and instability of tax laws, insufficient taxpayer services, complicated filing and payment procedures, and ineffective enforcement. For successful implementation, a combination of simplified tax laws, efficient services, effective audit programs, and fair penalties are essential (https://www.researchgate.net/publication/292615834_Factors_Affecting_Tax_Compliance_in_Rwanda_An_Empirical_Analysis)​.

Policy Implications and Recommendations: The literature suggests that the Rwanda Revenue Authority should focus on reconciling firms' separate VAT and income tax returns to enhance overall tax compliance. The positive effects of technological tools like EBMs on VAT compliance could potentially extend to income tax compliance if integrated and managed effectively​ (https://www.ictd.ac/publication/technology-evolution-tax-compliance-rwanda/).

This literature review indicates that while technological advancements and economic considerations play crucial roles in shaping VAT compliance, the broader context of the tax system's administration, including law simplicity, taxpayer services, and enforcement mechanisms, significantly impacts compliance behavior. Further research, particularly focusing on the unique characteristics and challenges of informal small businesses in Kigali, would provide deeper insights into this topic.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. online access)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2a:

I want to write a literature review for my research on VAT compliance among informal small businesses in Kigali. It will cover several topics, but I would appreciate your help on the subsection on VAT compliance in Rwanda, with possible comparisons to other countries.

Write the first draft of this section of the literature review based on relevant academic literature, where possible from high-impact journals. Make it comprehensive, with several references/sources and sufficient details. No bullet points or titles, just text and sources.

RESPONSE:

The Value Added Tax (VAT) compliance among informal small businesses, particularly in Rwanda, presents a complex and multifaceted issue. Drawing from recent academic research, this literature review aims to explore this topic, providing insights and comparisons with other countries.

In Rwanda, VAT compliance faces several challenges, often due to the complexity of the tax system and the nature of small businesses. A study examining VAT in Rwanda found that inconsistencies in filing behavior among VAT-remitting firms lead to suboptimal usage of electronic billing machines and failures to claim legitimate VAT credits, disproportionately affecting smaller firms (Mascagni, Dom, & Santoro, 2021). This is compounded by taxpayer confusion, fear of audit, and constraints in administrative capacity.

Another study in Rwanda on tax behavior of small firms highlighted that responses to tax instruments diverge from traditional models of compliance, leading to unintended and regressive consequences. Firms often target past liability when paying taxes, influenced by their uncertainty about earnings and enforcement perceptions (Tourek, 2022). This targeting behavior can result in overpayment of taxes, especially among less educated and less profitable entrepreneurs.

Comparing with Tanzania, the introduction of Electronic Fiscal Devices (EFDs) in Tanzania aimed at improving VAT compliance showed mixed results. The study found that EFD compliance was influenced not only by businesses' and customers' perceptions of detection and penalty risks but also by businesses' satisfaction with public services (Fjeldstad et al., 2020).

In South Africa, a study on VAT compliance among small businesses found that changes in VAT rate tend to have a significant impact on VAT registration decisions. This indicates that the magnitude of VAT rate changes influences tax compliance behavior, affecting both compulsory and voluntary registration (Schoeman, Evans, & du Preez, 2022).

Similarly, a Finnish study on the impact of VAT exemption thresholds demonstrated the significant role of compliance costs over tax incentives in influencing entrepreneurs' responses to VAT regulations. This finding underlines the importance of reducing compliance costs to encourage VAT compliance among small businesses (Harju, Matikka, & Rauhanen, 2019).

In conclusion, VAT compliance among informal small businesses in Rwanda, as in many other countries, is influenced by a combination of factors including administrative complexities, tax rate changes, compliance costs, and entrepreneurs' perceptions and behaviors. These factors create a challenging environment for ensuring effective VAT compliance, necessitating tailored approaches that consider the unique circumstances of small businesses and the administrative context in which they operate.

MODEL:

Consensus GPT via the ChatGPT Plus web interface (incl. search and links to Consensus)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2b:

Provide a full bibliography in Chicago style that I can copy-paste directly into the paper. I don't want Consensus links, but doi's.

RESPONSE:

Certainly, here's the full bibliography for your literature review in Chicago style with DOI references:

1. Fjeldstad, O., Kagoma, C., Mdee, E., Sjursen, I. H., and Somville, V. 2020. "The customer is king: Evidence on VAT compliance in Tanzania." World Development 128: 104841. doi: 10.1016/J.WORLDDEV.2019.104841.

2. Mascagni, G., Dom, R., and Santoro, F. 2021. "The VAT in Practice: Equity, Enforcement and Complexity." International Tax and Public Finance. doi: 10.19088/ICTD.2021.002.

3. Tourek, G. 2022. "Targeting in tax behavior: Evidence from Rwandan firms." Journal of Development Economics. doi: 10.1016/j.jdeveco.2022.102911.

4. Schoeman, A. H. A., Evans, C. C., and du Preez, H. 2022. "To register or not to register for value-added tax? How tax rate changes can influence the decisions of small businesses in South Africa." Meditari Accountancy Research. doi: 10.1108/medar-05-2021-1309.

5. Harju, J., Matikka, T., and Rauhanen, T. 2019. "Compliance costs vs. tax incentives: Why do entrepreneurs respond to size-based regulations?" Journal of Public Economics. doi: 10.1016/J.JPUBECO.2019.02.003.

Please ensure to cross-check the DOI references for accuracy and completeness according to your research paper's requirements. [Note that doi codes were correct but not links in the original response.]

MODEL:

Consensus GPT via the ChatGPT Plus web interface

Coding

Writing code (in Korinek, 2023)

LLMs are highly effective in generating various types of code, encompassing general programming, data handling, and routine tasks. In prompt #1, GPT-4 Turbo is used via Microsoft Copilot to create relatively simple Stata code. While LLMs have been notoriously weak at Stata, no errors are produced here. An earlier version of Copilot, Bing Chat Enterprise, produced multiple errors from the same prompt. The same code can also be produced without errors in, e.g., Python or R (prompt #2).

In prompt #3, the GPT-4 'All tools' model is used to replicate a well-known economic model. The graphs produced are accurate, simulated "under the hood" using Python. 

This said, for more complex models, the performance of LLMs is not as remarkable. While LLMs serve as valuable tools to streamline many basic programming tasks, they still necessitate human supervision for more complex challenges. Additionally, LLMs prove to be excellent educational aids, guiding users to identify appropriate libraries and functions for specific tasks.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I have an output dataset from a Vietnamese microsimulation model, stored in "/Users/user/Desktop/vn_2023_std.txt". Help me write Stata code for four tasks: 

1. to insheet the txt file to Stata

2. to obtain a histogram of the distribution of variable 'ils_dispyx' (disposable income), among individuals between the ages of 18 and 64 (age='dag')  

3. to calculate the share of individuals with zero disposable income

4. to calculate average disposable income among informal ('lfo'=0) and formal ('lfo'=1) workers.

Where required, use survey weights (variable 'dwt'). Provide the entire code that I can copy-paste to Stata to accomplish all tasks, with no other text. Avoid using the svyset package, just show how to make the calculations. [Comments in the response are in bold to ease reading.]

RESPONSE:

Here is the Stata code for your tasks:

// Task 1: Insheet the txt file to Stata

insheet using "/Users/user/Desktop/vn_2023_std.txt", clear

// Task 2: Obtain a histogram of the distribution of variable 'ils_dispyx' among individuals between the ages of 18 and 64

histogram ils_dispyx if dag >= 18 & dag <= 64, frequency

// Task 3: Calculate the share of individuals with zero disposable income

gen zero_income = ils_dispyx == 0

egen total_weight = total(dwt)

egen zero_income_weight = total(dwt) if zero_income == 1

gen zero_income_share = zero_income_weight / total_weight

sum zero_income_share

// Task 4: Calculate average disposable income among informal and formal workers

egen informal_weight = total(dwt) if lfo == 0

egen formal_weight = total(dwt) if lfo == 1

egen informal_income = total(ils_dispyx * dwt) if lfo == 0

egen formal_income = total(ils_dispyx * dwt) if lfo == 1

gen avg_informal_income = informal_income / informal_weight

gen avg_formal_income = formal_income / formal_weight

sum avg_informal_income avg_formal_income

Please replace the file path with the actual path of your file if it’s different. Also, make sure that the variable names match those in your dataset. If there are any issues, please let me know! I’m here to help. 😊

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

[Same as prompt #1 but for R]

RESPONSE:

library(ggplot2)

library(dplyr)

# Task 1: Load the data

data <- read.table("/Users/user/Desktop/vn_2023_std.txt", header = TRUE)

# Task 2: Histogram of disposable income for ages 18-64

data %>% 

  filter(dag >= 18, dag <= 64) %>%

  ggplot(aes(x = ils_dispyx, weight = dwt)) +

  geom_histogram(binwidth = 1, fill = "blue", color = "black") +

  labs(title = "Histogram of Disposable Income (Ages 18-64)", x = "Disposable Income", y = "Count")

--> While this is correct, a binwidth of 1 is very small given that the dataset is large (which wasn't specified in the prompt). This can be easily modified by the user.

# Task 3: Share of individuals with zero disposable income

zero_income_share <- sum(data$dwt[data$ils_dispyx == 0]) / sum(data$dwt)

print(paste("Share of individuals with zero disposable income:", zero_income_share))

# Task 4: Average disposable income among informal and formal workers

avg_income_by_lfo <- data %>%

  group_by(lfo) %>%

  summarise(avg_income = weighted.mean(ils_dispyx, dwt, na.rm = TRUE))

print(avg_income_by_lfo)

MODEL:

GPT-4 via the ChatGPT Plus web interface

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #3:

Simulate the Solow growth model and plot the results in a four-chart layout, including capital, labor, output and capital per worker over time for 100 periods, with all initiated at 1.

RESPONSE:

Here is the simulation of the Solow growth model plotted over 100 periods. The four charts display the evolution of capital stock (K), labor force (L), output (Y), and capital per worker (k) over time. Each of these variables was initialized at 1, and the model was simulated with specific parameters for population growth rate, savings rate, and depreciation rate. The results are visualized to show how these key economic indicators evolve in a hypothetical economy according to the Solow model.
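--> For reference, a minimal Python sketch of the kind of simulation run "under the hood". The parameter values below (savings rate, depreciation, population growth, capital share) are illustrative assumptions, not the ones GPT-4 actually chose:

```python
import numpy as np

# Illustrative parameters -- assumptions for this sketch, not GPT-4's choices
s, delta, n, alpha, A = 0.3, 0.05, 0.02, 0.3, 1.0
T = 100  # periods

K = np.empty(T)
L = np.empty(T)
Y = np.empty(T)
K[0] = L[0] = 1.0  # all series initiated at 1

for t in range(T):
    Y[t] = A * K[t] ** alpha * L[t] ** (1 - alpha)  # Cobb-Douglas production
    if t < T - 1:
        K[t + 1] = (1 - delta) * K[t] + s * Y[t]    # capital accumulation
        L[t + 1] = (1 + n) * L[t]                   # labor force growth

k = K / L  # capital per worker
```

Plotting K, L, Y and k in a 2x2 matplotlib grid reproduces the four-chart layout described in the response.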

MODEL:

GPT-4 via the ChatGPT Plus web interface (using Advanced Data Analysis)

Explaining code ◐

Beyond writing code from English language instructions, as illustrated in previous examples under the 'Writing code' section, LLMs can adeptly perform the inverse task: explaining existing code in plain English, much like a personal tutor. This feature is helpful when working with code from sources where documentation is lacking or unclear. It allows users, regardless of their programming proficiency, to gain a clearer understanding of the code's function and logic.

This functionality aids in educational contexts, providing learners with a deeper insight into the nuances of programming languages and coding practices. In professional settings, where researchers often encounter unfamiliar or legacy code, this feature can significantly expedite the process of code comprehension and debugging.

Prompt #1 explains Python code using Microsoft Copilot, while prompt #2 explains Stata code using GPT-4.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

What does the following function do?

def regression(X, y):

    X = np.hstack((np.ones((X.shape[0], 1)), X))

    XtXinv = np.linalg.inv(np.dot(X.T, X))

    beta1 = np.dot(np.dot(XtXinv, X.T), y)

    residuals = y - np.dot(X, beta1)

    V = np.diag(residuals**2)

    Vinv = np.linalg.inv(V)

    XtVinvXinv = np.linalg.inv(np.dot(X.T, np.dot(Vinv, X)))

    beta2 = np.dot(np.dot(XtVinvXinv, X.T), np.dot(Vinv, y))

    return beta1, beta2

RESPONSE:

The function regression(X, y) performs two types of regression analysis on the input data X and y.

1. Ordinary Least Squares (OLS) Regression: The first part of the function calculates the OLS regression coefficients, beta1. It does this by adding a column of ones to the input matrix X for the intercept term, calculating the inverse of X'X (where ' denotes the transpose), and then multiplying this by X'y to get the OLS estimates.

2. Generalized Least Squares (GLS) Regression: The second part of the function calculates the GLS regression coefficients, beta2. It first calculates the residuals from the OLS regression, and then uses these to estimate the variance-covariance matrix V of the errors. It then inverts this matrix and uses it to weight the observations in the GLS regression. The GLS estimates are calculated similarly to the OLS estimates, but with the addition of the weighting by the inverse of V.

The function returns both the OLS and GLS regression coefficients. Please note that this function assumes that the errors in the regression model are heteroskedastic, i.e., the variance of the errors may change with the level of the independent variable(s). If this assumption is not met, the GLS estimates may not be efficient.

Also, this function does not handle cases where X'X or V are not invertible (i.e., they are singular or nearly singular). In such cases, the function may return incorrect results or fail due to a computational error.

Please ensure that these assumptions hold when using this function, or consider adding error handling code to manage these situations. If you need further assistance, feel free to ask! 😊
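--> As a quick sanity check (not part of the response), the function can be run on simulated data with known coefficients; the OLS estimates should recover an intercept of 1 and slopes of 2 and -1. The sample size and coefficient values are arbitrary choices for this sketch:

```python
import numpy as np

def regression(X, y):
    # Function from the prompt, reproduced here so the example is self-contained
    X = np.hstack((np.ones((X.shape[0], 1)), X))
    XtXinv = np.linalg.inv(np.dot(X.T, X))
    beta1 = np.dot(np.dot(XtXinv, X.T), y)
    residuals = y - np.dot(X, beta1)
    V = np.diag(residuals**2)
    Vinv = np.linalg.inv(V)
    XtVinvXinv = np.linalg.inv(np.dot(X.T, np.dot(Vinv, X)))
    beta2 = np.dot(np.dot(XtVinvXinv, X.T), np.dot(Vinv, y))
    return beta1, beta2

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=500)

beta1, beta2 = regression(X, y)  # beta1 should be close to (1, 2, -1)
```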

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Explain this Stata code [comments are in bold to ease reading]:

* Folder with output datasets:

global output "/Users/user/Desktop"

* Years, definitions, and income concepts:

global years = "2022 2023"   

global Nyear: word count ${years}

global definitions = "dispyx con" 

global Ndef: word count ${definitions}

* Poverty rate matrix:

mat povR = J(${Ndef},${Nyear},.)

mat colnames povR = ${years}

mat rownames povR = "Dispable income" "Consumption"

matnames povR

mat list povR

* Loop to calculate poverty rates:

local col = 1

foreach year of global years {

noi di in r "*** System: `year' ***"

insheet using "${output}//vn_`year'_std.txt", clear

* Equivalence scale (national definition)

bysort idhh: gen ses_hh = sum(ses)

* Household level equivalised income and consumption

quietly foreach def of global definitions {

bysort idhh: egen temp = total(ils_`def')

gen `def'_equ = temp / ses_hh

gen `def'_neg = `def'_equ

replace `def'_neg = 0 if `def'_neg<0

drop temp

}

* Matrix:

local row = 1

foreach def of global definitions {

povdeco `def'_neg [aw=dwt], varpline(spl) 

mat povR[`row',`col'] = r(fgt0) * 100

local row = `row'+ 1

local col = `col'+1

}

 * Show matrix 

 matrix list povR

RESPONSE:

Explanation of Income and Consumption Equivalization:

Income and consumption are equivalized to account for differences in household size and composition, ensuring a fair comparison across households. In this code, equivalization is achieved by:

1. Calculating an equivalence scale for each household, typically based on the number of household members and their ages, reflecting their needs (e.g., children might be given a lower weight than adults).

2. Summing up income or consumption for each household (using total(ils_`def') in the code).

3. Dividing the total household income or consumption by the household's equivalence scale (as seen in gen `def'_equ = temp / ses_hh), resulting in an equivalized figure.

This equivalized income or consumption represents an adjusted value that accounts for household size and composition, allowing for more accurate comparisons of economic well-being across different households.
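--> The three steps can be sketched in a few lines of pandas (the data and weights below are made up for illustration; variable names follow the Stata code):

```python
import pandas as pd

# Toy data for two households: individual incomes and equivalence-scale weights
df = pd.DataFrame({
    "idhh":       [1, 1, 2, 2, 2],
    "ils_dispyx": [100.0, 50.0, 200.0, 0.0, 40.0],
    "ses":        [1.0, 0.5, 1.0, 0.5, 0.3],  # per-person weights
})

g = df.groupby("idhh")
df["ses_hh"] = g["ses"].transform("sum")          # step 1: household equivalence scale
df["inc_hh"] = g["ils_dispyx"].transform("sum")   # step 2: total household income
df["dispyx_equ"] = df["inc_hh"] / df["ses_hh"]    # step 3: equivalised income
```

Here household 1 has total income 150 and equivalence scale 1.5, giving an equivalised income of 100 for each of its members.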

MODEL:

GPT-4 via the ChatGPT Plus web interface

Coding

Translating code  ⚫

LLMs can translate code from one programming language to another. This skill is particularly useful for porting a project to a different platform or integrating code snippets from online forums like StackExchange that are in a different programming language. For example, an LLM can translate a Stata script into R (prompt #1).

Current LLMs excel at translating shorter code segments across popular programming languages, making them a reliable tool for quick conversions. However, for longer or more complex code, iteration and human oversight are often necessary to ensure accuracy and functionality. For instance, prompt #1 (with rather complex code) requires specifying key details about the original code/data and is still not perfect (see comments in red).

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Translate this Stata code to R [Code comments are formatted in bold to ease reading; the example is the same as in prompt #2 under 'Explaining code']:

* Folder with output datasets:

global output "/Users/user/Desktop"

* Years, definitions, and income concepts:

global years = "2022 2023"   

global Nyear: word count ${years}

global definitions = "dispyx con" 

global Ndef: word count ${definitions}

* Poverty rate matrix:

mat povR = J(${Ndef},${Nyear},.)

mat colnames povR = ${years}

mat rownames povR = "Dispable income" "Consumption"

matnames povR

mat list povR

* Loop to calculate poverty rates:

local col = 1

foreach year of global years {

noi di in r "*** System: `year' ***"

insheet using "${output}//vn_`year'_std.txt", clear

* Equivalence scale (national definition)

bysort idhh: gen ses_hh = sum(ses)

* Household level equivalised income and consumption

quietly foreach def of global definitions {

bysort idhh: egen temp = total(ils_`def')

gen `def'_equ = temp / ses_hh

gen `def'_neg = `def'_equ

replace `def'_neg = 0 if `def'_neg<0

drop temp

}

* Matrix:

local row = 1

foreach def of global definitions {

povdeco `def'_neg [aw=dwt], varpline(spl) 

mat povR[`row',`col'] = r(fgt0) * 100

local row = `row'+ 1

local col = `col'+1

}

 * Show matrix 

 matrix list povR

RESPONSE:

To translate your Stata code to R, I'll follow the overall structure of your code and implement the equivalent logic in R. The key tasks are reading the data, calculating household-level equivalized income and consumption, and then computing the poverty rates using individual-specific poverty lines ('spl').

# Set output directory and parameters

output <- "/Users/user/Desktop"

years <- c("2022", "2023")

definitions <- c("dispyx", "con")

# Initialize poverty rate matrix

povR <- matrix(NA, nrow = length(definitions), ncol = length(years))

rownames(povR) <- c("Disposable income", "Consumption")

colnames(povR) <- years

# Load necessary libraries

library(dplyr)

# Loop to calculate poverty rates

for (year in years) {

  cat("*** System:", year, "***\n")

  # Read the data

  data <- read.table(paste0(output, "/vn_", year, "_std.txt"), header = TRUE)

  # Equivalence scale (national definition)

  data <- data %>% 

    group_by(idhh) %>% 

    mutate(ses_hh = sum(ses))

  # Loop through definitions

  for (def in definitions) {

    # Household level equivalised income and consumption

    data <- data %>%

      group_by(idhh) %>%

      mutate(!!paste0(def, "_equ") := sum(!!as.symbol(paste0("ils_", def))) / ses_hh) %>%

      ungroup() %>%

      mutate(!!paste0(def, "_neg") := pmax(!!as.symbol(paste0(def, "_equ")), 0))

    # Calculate poverty rate based on individual-specific poverty line 'spl'

    pov_rate <- mean(data[[paste0(def, "_neg")]] < data$spl) * 100

    # Store the result in the matrix

    povR[def == definitions, year == years] <- pov_rate

  }

}

# Show matrix

print(povR)

--> The code is largely correct, but the poverty rate calculation does not include weights. One correct specification uses the stats package as follows:
    pov_rate <- stats::weighted.mean(data[[paste0(def, "_neg")]] < data$spl, w=data$dwt) * 100
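--> Equivalently, the weighted headcount ratio (FGT0) is simply the weighted mean of a below-poverty-line indicator. A Python sketch with made-up numbers:

```python
import numpy as np

income = np.array([50.0, 120.0, 80.0, 200.0])  # equivalised income (toy values)
spl = np.array([100.0, 100.0, 90.0, 100.0])    # individual-specific poverty lines
dwt = np.array([1.0, 2.0, 1.0, 1.0])           # survey weights

poor = income < spl                         # below-poverty-line indicator
fgt0 = np.average(poor, weights=dwt) * 100  # weighted headcount ratio, in percent
```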

MODEL:

GPT-4 via the ChatGPT Plus web interface

Debugging code ◐

One of the most frustrating parts of programming is debugging code. LLMs exhibit useful capabilities in this domain as well: they can not only help interpret error messages but also verify code.

Prompt #1 contains a typical error message from Stata that is obtained when a variable that I'm trying to create has already been defined. The LLM recognizes the source of the error, explains it, and revises the code. Prompt #2 illustrates another Stata error that is correctly explained and rectified.

LLMs can help to catch a wide variety of mistakes in code, including typos, required packages that are missing, and indices that are mixed up.  They can also provide advice when one describes how a program malfunctions. This said, LLMs are less capable of detecting high-level mistakes in the algorithms underlying code.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I received this error message in Stata:

variable temp already defined

r(110);

Temp is created here:
quietly foreach def of global definitions {

    bysort idhh: egen temp = total(ils_`def')  

    gen `def'_equ = temp / ses_hh       

    gen `def'_neg = `def'_equ                  

    replace `def'_neg = 0 if `def'_neg < 0

}

How can I correct the error?

RESPONSE:

The error "variable temp already defined" occurs because the variable temp is being created in each iteration of the loop without being dropped at the end of each iteration. In Stata, if a variable named temp already exists, the egen command will not overwrite it, leading to this error.

To correct this error, you should either drop the temp variable at the end of each loop iteration or create a uniquely named temporary variable for each iteration. The first approach is simpler and keeps the code clean. Here's how you can modify your code:

quietly foreach def of global definitions {

    bysort idhh: egen temp = total(ils_`def')  

    gen `def'_equ = temp / ses_hh       

    gen `def'_neg = `def'_equ                  

    replace `def'_neg = 0 if `def'_neg < 0

    drop temp

}

In this revised code, drop temp is added at the end of the loop body. This ensures that the temp variable is deleted after each iteration, allowing a new temp variable to be created in the next iteration without any naming conflict.

MODEL:

GPT-4 via the ChatGPT Plus web interface

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

I received this error message in Stata:

option if not allowed

r(198);

Option 'if' is only used here:
quietly foreach def of global definitions {

    bysort idhh: egen temp = total(ils_`def')  

    gen `def'_equ = temp / ses_hh       

    gen `def'_neg = `def'_equ                  

    replace `def'_neg = 0, if `def'_neg < 0

    drop temp

}

How can I correct the error?

RESPONSE:

The error you're encountering in Stata, "option if not allowed", is due to the incorrect usage of the if qualifier with the replace command. In Stata, the if condition is used directly without a comma when working with commands like replace. The correct syntax is to place the if condition right after the command, without a comma.

Here's the corrected portion of your code:

quietly foreach def of global definitions {

    bysort idhh: egen temp = total(ils_`def')  

    gen `def'_equ = temp / ses_hh       

    gen `def'_neg = `def'_equ                  

    replace `def'_neg = 0 if `def'_neg < 0

    drop temp

}

In this revised section, note the replace command:

    replace `def'_neg = 0 if `def'_neg < 0

This will correctly replace the values of `def'_neg with 0 wherever `def'_neg is less than 0, without causing any syntax errors.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Coding

Creating synthetic data (not in Korinek, 2023)

LLMs are adept at writing syntax in several programming languages to generate synthetic data. This is demonstrated in the prompts below, following up on the example under 'Research strategy'. Prompts #1 and #2 ask for equivalent code in Stata and R, respectively. The outputs work as expected.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

I’d like to estimate the causal effect of a national school feeding programme in Viet Nam, implemented in 36 provinces in 2018, on school enrollment. I will soon get access to data on school enrollment rates in all of the 96 provinces from 2014 to 2021, along with information on household income and average parental education. You suggested a Difference-in-Differences (DiD) approach as a suitable research strategy. Since I don't have the actual data yet, could you please provide me with Stata code to generate a synthetic dataset that I can use to test my methods? You can assume 10 schools per province. [Comments are in bold to ease reading, and only the syntax from the response is shown.]

RESPONSE:

clear

set obs 7680 // 96 provinces * 10 schools * 8 years

// Generate province, school, and year identifiers

egen province = seq(), block(80) to(96)

egen school = seq(), block(8) to(10)

egen year = seq(), from(2014) to(2021)

// Generate treatment variable

gen treated = 0

replace treated = 1 if province <= 36 & year >= 2018 // Treatment starts in 2018

// Generate outcome variable (school enrollment rate)

gen enrollment = rnormal(80, 10) // Assume a mean of 80 and sd of 10

replace enrollment = enrollment + treated*5 // Assume treatment increases enrollment by 5 percentage points

// Generate control variables

gen income = rnormal(10000, 2000) // Assume a mean income of 10,000 and sd of 2,000

gen avg_parent_education = rnormal(12, 2) // Assume a mean of 12 years and sd of 2 years

// Add error term to outcome variable

replace enrollment = enrollment + rnormal(0, 2) // Assume a mean of 0 and sd of 2

// View the data

browse

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

I’d like to estimate the causal effect of a national school feeding programme in Viet Nam, implemented in 36 provinces in 2018, on school enrollment. I will soon get access to data on school enrollment rates in all of the 96 provinces from 2014 to 2021, along with information on household income and average parental education. You suggested a Difference-in-Differences (DiD) approach as a suitable research strategy. Since I don't have the actual data yet, could you please provide me with R code to generate a synthetic dataset that I can use to test my methods? You can assume 10 schools per province. [Comments are in bold to ease reading, and only the syntax from the response is shown.]

RESPONSE:

# Load necessary library

library(dplyr)

# Set seed for reproducibility

set.seed(123)

# Define parameters

n_provinces <- 96

n_schools <- 10

n_years <- 8 # 2014 to 2021

# Generate data

data <- expand.grid(province = 1:n_provinces,

                    school = 1:n_schools,

                    year = 2014:(2014 + n_years - 1))

# Add treatment variable (1 if province <= 36 and year >= 2018, 0 otherwise)

data <- data %>%

  mutate(treatment = ifelse(province <= 36 & year >= 2018, 1, 0))

# Add outcome variable (school enrollment rate), effect of treatment is assumed to be 5 percentage points

data <- data %>%

  mutate(enrollment_rate = rnorm(nrow(data), mean = 80, sd = 10) + treatment * 5)

# Add control variables

data <- data %>%

  mutate(household_income = rnorm(nrow(data), mean = 50000, sd = 10000),

         parental_education = rnorm(nrow(data), mean = 12, sd = 2))

# View the first few rows of the dataset

head(data)

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

Creating scientific graphics (not in Korinek, 2023)

Economists frequently need to create graphical elements to be used in academic papers, such as model graphs, lines, curves, circles, rectangles, etc. At present, LLMs are perhaps best used for this purpose by generating LaTeX syntax to produce such graphs, although more general programming languages such as R or Python work as well.

Prompts #1 and #2 ask GPT-4 to visualize a simple supply and demand model using LaTeX and R, respectively. The LLM does a good job in both cases, relying on the pgfplots package in LaTeX and ggplot2 in R. Another example from Scott Cunningham, using the TikZ package in LaTeX, is reproduced in prompt #3.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Provide Latex code to create a graph depicting the basic elements of a supply and demand model in an economy. The graph should include a supply curve and a demand curve (straight lines) as well as the equilibrium price and quantity, with dashed vertical and horizontal lines from the point where the curves intersect. I don't want any numbers on the axes. [Comments in the response are in bold to ease reading. Only the syntax from the response is shown, along with LaTeX output.]

RESPONSE:

\documentclass{article}

\usepackage{pgfplots}

\pgfplotsset{compat=1.17}

\begin{document}

\begin{tikzpicture}

\begin{axis}[

    axis lines = left,

    xlabel = Quantity,

    ylabel = Price,

    xtick=\empty,  % No x-axis numbers

    ytick=\empty,  % No y-axis numbers

]

% Demand Curve

\addplot [

    domain=0:10, 

    samples=100, 

    color=red,

]

{10-x};

\addlegendentry{Demand}

% Supply Curve

\addplot [

    domain=0:10, 

    samples=100, 

    color=blue,

    ]

{x};

\addlegendentry{Supply}

% Dashed Lines for Equilibrium

\addplot [dashed] coordinates {(5,0) (5,5)};

\addplot [dashed] coordinates {(0,5) (5,5)};

% Labels

\node at (axis cs:5,5) [anchor=south west] {Equilibrium};

\node at (axis cs:5,0) [anchor=north] {Q*};

\node at (axis cs:0,5) [anchor=east] {P*};

\end{axis}

\end{tikzpicture}

\end{document}

LATEX OUTPUT:

MODEL:

GPT-4 via the ChatGPT Plus web interface

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Provide R code to create a graph depicting the basic elements of a supply and demand model in an economy. The graph should include a supply curve and a demand curve (straight lines) as well as the equilibrium price and quantity, with dashed vertical and horizontal lines from the point where the curves intersect. I don't want any numbers on the axes. [Additional instructions were required to keep the text labels while removing the numeric labels from the axes. Comments in the response are in bold to ease reading. Only the syntax from the response is shown, along with R output.]

RESPONSE:

# Load the ggplot2 package

library(ggplot2)

# Define the supply and demand functions

supply <- function(q) { 0.5 * q + 3 }

demand <- function(q) { -0.5 * q + 9 }

# Generate data

q_values <- seq(0, 10, by = 0.1)

supply_data <- data.frame(q = q_values, p = sapply(q_values, supply))

demand_data <- data.frame(q = q_values, p = sapply(q_values, demand))

# Find equilibrium

equilibrium <- supply_data$p == demand_data$p

eq_price <- supply_data$p[equilibrium]

eq_quantity <- supply_data$q[equilibrium]

# Create the plot

ggplot() +

  geom_line(data = supply_data, aes(x = q, y = p), color = "blue") +

  geom_line(data = demand_data, aes(x = q, y = p), color = "red") +

  geom_vline(xintercept = eq_quantity, linetype = "dashed") +

  geom_hline(yintercept = eq_price, linetype = "dashed") +

  annotate("text", x = eq_quantity, y = max(supply_data$p), label = "Equilibrium", vjust = -0.5) +

  theme_bw() +

  theme(axis.text.x = element_blank(), axis.ticks.x = element_blank(),

        axis.text.y = element_blank(), axis.ticks.y = element_blank()) +

  xlab("Quantity") +

  ylab("Price")

R OUTPUT:

MODEL:

GPT-4 via the ChatGPT Plus web interface

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #3:

Create a directed acyclic graph DAG in Latex of an instrumental variables design. Label the IV with “Z”, the treatment “D”, the outcome “Y”, and the unobserved confounder (with dashed lines) “U”. Then have an error term “varepsilon” pointing to the outcome but from the top right direction. Have U be above D and Y in between them. Have Z be top left of D. [Comments in the response are in bold to ease reading. Only the syntax from the response is shown, along with LaTeX output.]

RESPONSE:

\documentclass{article}

\usepackage{tikz}

\begin{document}

\begin{tikzpicture}[->,>=stealth,auto,node distance=2cm, thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]

  % Nodes

  \node[main node] (Z) {Z};

  \node[main node] (D) [below right of=Z] {D};

  \node[main node] (Y) [below right of=D] {Y};

  \node[main node, dashed] (U) [above of=D] {U};

  \node (E) [above right of=Y] {$\varepsilon$};

  % Paths

  \path[every node/.style={font=\sffamily\small}]

    (Z) edge node [left] {} (D)

    (D) edge node [right] {} (Y)

    (U) edge [dashed] node [right] {} (D)

    (U) edge [dashed] node [right] {} (Y)

    (E) edge node [right] {} (Y);

\end{tikzpicture}

\end{document}

LATEX OUTPUT:

MODEL:

GPT-4 via the ChatGPT Plus web interface

Data analysis

Creating figures from data ◐

The Advanced Data Analysis feature, included in the 'All tools' GPT-4 model in ChatGPT Plus, is one of the most useful functionalities of LLMs for economists. It leverages GPT-4's enhanced coding abilities to produce a range of figures and graphs, running the code in the background. 

A practical illustration of this has been replicated below following Korinek (2023). I uploaded data comprising stock market prices of three major tech companies into the tool and directed GPT-4 to generate two distinct types of graphs: one illustrating the performance of these stocks along with their respective beta values (prompts #1a–c), and another illustrating my stock portfolio given an equal initial investment (prompts #2a–b). More general figures can be created as well.
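Behind the scenes, requests like these are translated into short Python snippets. A sketch of what the normalisation and beta calculations might look like (the toy prices and the use of simple period returns are illustrative assumptions; the actual session worked from the uploaded CSV):

```python
import pandas as pd

# Toy prices; in the actual example these would come from stock_data.csv
prices = pd.DataFrame({
    "MSFT":  [100.0, 102.0, 101.0, 105.0],
    "^GSPC": [4000.0, 4040.0, 4020.0, 4100.0],
})

normalized = prices / prices.iloc[0] * 100  # every series starts at 100

returns = prices.pct_change().dropna()      # period-over-period returns
beta = returns["MSFT"].cov(returns["^GSPC"]) / returns["^GSPC"].var()
```

Beta here is the covariance of a stock's returns with the index returns, divided by the variance of the index returns.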

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1a:

Please display a graph that shows the performance of these securities normalized to start at 100? (Attachment: stock_data.csv)

RESPONSE:

Here is the graph showing the normalized performance of the securities (GOOG, META, MSFT, and ^GSPC), starting at a value of 100. The graph displays how each security's value has changed over time relative to its initial value. [Remainder omitted; the final graph is shown below.]

PROMPT #1b:

Calculate the beta for each of the stocks and add it to the legend. Change the colors to make them easier to distinguish. Use the title "One-Decade Performance of Three Big Tech Companies".

RESPONSE:

Here is the updated graph showing the one-decade performance of three big tech companies (GOOG, META, MSFT), with their respective beta values indicated in the legend. The colors have been adjusted for better distinction, and the title reflects your request. [Remainder omitted; the final graph is shown below.]

PROMPT #1c:

Double the size of the title, legend, axes labels and ticks. Label "^GSPC" in the legend as "S&P 500".

RESPONSE:

The graph has been updated with an enlarged title, legend, axes labels, and ticks for better visibility. The S&P 500 index is now labeled as "S&P 500" in the legend.

MODEL:

GPT-4 via the ChatGPT Plus web interface (Advanced Data Analysis function)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2a:

Assume I put $100 in each of the three stocks at the start. Can you show a bar chart that shows how much I would have in each stock at the end of the period?

RESPONSE:

Here's a bar chart showing the final value of a $100 investment in each of the three stocks (GOOG, META, MSFT) at the end of the period. This visualizes how much your investment would have grown in each stock over the time frame. [Left out from here as the final graph is shown below.]

PROMPT #2b:

Can you label each position with its current dollar value and the percentage change? Can you make the bar for MSFT pop out of the chart by highlighting it?

RESPONSE:

Here's the updated bar chart showing the final value of a $100 investment in each of the three stocks at the end of the period. Each bar is labeled with its current dollar value and the percentage change. The bar for MSFT has been highlighted to stand out from the others. 

MODEL:

GPT-4 via the ChatGPT Plus web interface (Advanced Data Analysis function)

Extracting data (different in Korinek, 2023)

LLMs have a remarkable ability to pull out specific data from textual content. This is neatly demonstrated in prompt #1 below, closely following Korinek (2023), and in a more complex example in prompt #2. Beyond just text, LLMs can handle numerical data, such as extracting stock prices from news pieces or dosages from pharmaceutical databases. They are also proficient in identifying and retrieving elements, e.g. phone numbers, postal codes, dates, times, or prices. For handling vast volumes of data, this process can be efficiently scaled up using API integration. 
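A scaled-up pipeline of this kind typically wraps each document in an extraction instruction, sends it to the API, and parses the structured reply. The helper names below are hypothetical and the API call itself is omitted; this is only a sketch of the surrounding plumbing:

```python
import csv
import io

# Hypothetical helpers (not from the source): build the extraction
# instruction and parse the model's CSV-formatted reply.
def build_extraction_prompt(text):
    """Ask the model to return extracted items as CSV."""
    return ("Extract every phone number, date, and price from the text "
            "below. Return CSV with columns: type,value\n\n" + text)

def parse_csv_response(reply):
    """Turn the model's CSV reply into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(reply)))

# In a real pipeline, build_extraction_prompt(doc) would be sent to the
# chat completions API for each document and the replies collected:
reply = "type,value\nphone,555-0100\nprice,$19.99"
rows = parse_csv_response(reply)
print(rows)
```

Requesting a machine-readable format such as CSV or JSON in the prompt is what makes the replies easy to aggregate across thousands of documents.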

Relatedly, Dunn et al. (2022) explore how LLMs can be effectively employed to extract structured information from scientific texts. In economics, this technology offers potential applications like recognizing specific entities in economic history research.

LLMs are also getting better at extracting information from and interpreting figures and tables. Prompt #3 illustrates the latter, offering a correct interpretation of information in a table of results. With figures and more complex tables, the results are not consistent as of early 2024.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Mark got an A in econ and a B+ in math. Sally got an A- in both econ and math. Frank got a B in econ and a C in math. On a 4.0 scale, A=4.0, A-=3.7, B+=3.3, B=3.0, and C=2.0. Calculate the 4.0-scale average from econ and math.

Then provide all information reformatted as follows, only showing this title "Name & econ grade & math grade & average" and the related information below that.

RESPONSE:

Sure, let’s calculate the 4-scale average for each student:

Now, let’s reformat the information as requested:

Name & econ grade & math grade & average

Mark & A & B+ & 3.65

Sally & A- & A- & 3.7

Frank & B & C & 2.5

I hope this helps! If you have any other questions or need further assistance, feel free to ask. 😊

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Extract details of all 'At a glance' indicators (both indicator and current estimate) available at: https://fred.stlouisfed.org/ List in short bullet points.

RESPONSE:

As of today, the 'At a Glance' economic indicators on the FRED website include specific values for each measure:

For the most current and detailed data on these indicators, you can visit the FRED website.

MODEL:

GPT-4 via the ChatGPT Plus web interface (incl. online search) 

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #3:

Table 6 presents impact estimates on schooling, comparing those with five and a half years of PROGRESA/Oportunidades to those never receiving benefits. They also show estimates based on the T2000 versus C2003 comparison, i.e. estimating impacts for those with nearly four years of benefits versus never receiving benefits. The table focuses on the effects on school grades completed for both boys and girls, particularly those who were younger when the program began. Discuss the results for boys aged 15-16. [Table 6 in Behrman et al., 2011.]

RESPONSE:

The results for boys aged 15-16 as shown in Table 6 are as follows:

The *** next to the numbers in the 'Impact' column indicates statistical significance at the 1% level, which means that the probability that the true impact is zero (no effect of the program) is very low. This statistical significance suggests that the PROGRESA/Oportunidades program had a robust and positive impact on the educational attainment for boys aged 15-16.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Data analysis

Reformatting data  ⚫

LLMs can convert text or numbers into various formats. Following the first example under 'Extracting data', in the two prompts below, Microsoft Copilot skillfully transforms data into a comma-separated values (CSV) format and then into a well-structured LaTeX table. This showcases the model's proficiency in handling and presenting data in multiple formats, making it a helpful tool for researchers' data formatting needs.
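For comparison, the same '&'-to-CSV transformation can be done mechanically in a few lines; the LLM's advantage is performing it from a natural-language request, without any code:

```python
# The '&'-separated rows from the example, converted to CSV mechanically
rows = [
    "Name & Econ Grade & Math Grade & Average",
    "Mark & A (4.0) & B+ (3.3) & 3.65",
    "Sally & A- (3.7) & A- (3.7) & 3.70",
    "Frank & B (3.0) & C (2.0) & 2.50",
]
csv_lines = [",".join(field.strip() for field in row.split("&"))
             for row in rows]
print("\n".join(csv_lines))
```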

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Reformat this output in CSV format:

Name & Econ Grade & Math Grade & Average

Mark & A (4.0) & B+ (3.3) & 3.65

Sally & A- (3.7) & A- (3.7) & 3.70

Frank & B (3.0) & C (2.0) & 2.50

RESPONSE:

Sure, here is the information reformatted in CSV format:

Name,Econ Grade,Math Grade,Average

Mark,A (4.0),B+ (3.3),3.65

Sally,A- (3.7),A- (3.7),3.70

Frank,B (3.0),C (2.0),2.50

Each field is separated by a comma (,), and each record is on a new line. This is the standard format for CSV files and should be compatible with most spreadsheet software. If you have any other questions or need further assistance, feel free to ask. 😊

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #2:

Convert the output into a complete LaTeX document, only providing information I can copy-paste to a LaTeX editor to obtain a neat PDF.

Name & Econ Grade & Math Grade & Average

Mark & A (4.0) & B+ (3.3) & 3.65

Sally & A- (3.7) & A- (3.7) & 3.70

Frank & B (3.0) & C (2.0) & 2.50

RESPONSE:

Sure, here is the information reformatted as a LaTeX document:

\documentclass{article}

\usepackage{tabularx}

\begin{document}

\title{Grades Report}

\author{Generated by Bing}

\date{\today}

\maketitle

\begin{table}[h]

\centering

\begin{tabular}{|c|c|c|c|}

\hline

\textbf{Name} & \textbf{Econ Grade} & \textbf{Math Grade} & \textbf{Average} \\

\hline

Mark & A (4.0) & B+ (3.3) & 3.65 \\

\hline

Sally & A- (3.7) & A- (3.7) & 3.70 \\

\hline

Frank & B (3.0) & C (2.0) & 2.50 \\

\hline

\end{tabular}

\caption{Grades and averages}

\end{table}

\end{document}

You can copy and paste this code into your LaTeX editor to generate a PDF. If you have any other questions or need further assistance, feel free to ask. 😊

LATEX OUTPUT:

MODEL:

GPT-4 Turbo via Microsoft Copilot (Precise mode)

Classifying and scoring text ◐

The advancement of LLMs has enhanced the way text is utilized in social science research, as highlighted by Gentzkow et al. (2019). These models transcend traditional statistical methods by not just analyzing text data but also comprehending the meaning within sentences.

An example of this capability is demonstrated in prompt #1, in a task adapted from Korinek (2023). GPT-4 is prompted to assess whether specific jobs listed in the O*NET database are easy or hard to automate. In the approach used, the model is first asked to justify its reasoning—akin to having a student explain their thinking process—before it classifies the job. This methodology encourages the LLM to engage in deeper analysis, leading to a more reasoned response. While the results from GPT-4 are generally reasonable, they are not foolproof; its interpretations and judgments still leave room for improvement.
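The justify-then-classify pattern generalizes to any list of tasks. A minimal sketch of a prompt builder, with wording that mirrors the example but is otherwise illustrative:

```python
# Sketch of the "justify first, then classify" prompt pattern.
# The wording is illustrative, based on the example prompt.
def automation_prompt(tasks):
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, 1))
    return (f"Here are {len(tasks)} tasks from O*Net:\n{numbered}\n"
            "Provide a 5-10-word description of how easy to automate each "
            'task is and classify into "easy" or "hard" to automate.')

prompt = automation_prompt(["Prepare and maintain employee records."])
print(prompt)
```

Placing the justification request before the classification request matters: the model's final label then conditions on its own stated reasoning.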

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Here are 5 tasks from O*Net:

1. Monitor operational records and reports to ensure accuracy and efficiency.

2. Develop and implement policies and procedures for recordkeeping.

3. Prepare and maintain employee records.

4. Maintain and update filing, inventory, mailing, and database systems.

5. Compile and prepare reports, charts, and graphs.

Provide a 5-10-word description of how easy to automate each task is and classify into "easy" or "hard" to automate. 

Provide your output in the form of a LaTeX table with the following columns:

no; task; justification; classification

RESPONSE (VERSION GENERATED IN LATEX):

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

Data analysis

Extracting sentiment ◐

LLMs are adept at discerning sentiment within text. They can efficiently sort materials such as tweets or news articles into categories like "positive" or "negative." As demonstrated by Korinek (2023), LLMs can analyse and categorize statements from the U.S. Federal Open Market Committee (FOMC) relatively well. In a broader study, Ardekani et al. (2023) develop a model that predicts economic sentiment, analysing both U.S. economic news and the monetary policy announcements of the European Central Bank (ECB). An example of a similar application is shown in prompt #1 below.
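Once the LLM has labeled each text, the labels can be mapped onto a numeric scale for aggregation. A minimal sketch, with an illustrative label-to-score mapping (my assumption, not from the cited studies):

```python
# Illustrative mapping from LLM sentiment labels to a 1-5 scale,
# so many labeled texts can be aggregated numerically
scale = {"very negative": 1, "negative": 2, "neutral": 3,
         "positive": 4, "very positive": 5}

labels = ["positive", "very negative", "neutral"]  # e.g. one per article
scores = [scale[label] for label in labels]
average = sum(scores) / len(scores)
print(round(average, 2))
```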

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Below are excerpts from the U.N. World Economic Situation Prospects reports from 2020 (top) and 2021 (bottom) related to international tourism in 2019 and 2020, respectively. Summarize the sentiment of both excerpts in one bullet point, and label them on a scale from 1 (very negative) to 5 (very positive), explaining your reasoning.

"Confidence in global tourism remains positive yet cautious for the remainder of 2019. Weakening economic indicators, trade tensions and Brexit-related uncertainties have started to take a toll on business and consumer confidence. The UNWTO Confidence Index points to more moderate growth in arrivals during the period September-December 2019, particularly in Europe and the Americas.  ... UNWTO estimates 3 to 4 per cent growth in international arrivals globally for 2019, reflecting rising tourism demand overall, though at a slower pace. At the regional level, prospects are strongest for Asia and the Pacific, where arrivals are expected to have grown 5 to 6 per cent"

"The tourism industry has been the hardest-hit services sector, as COVID-19 brought international travel to a virtual standstill from March 2020 onward. Before the pandemic, travel services accounted for almost one third of developing countries’ services exports. International tourist arrivals (overnight visitors) are estimated to have plunged by about 70 per cent globally in 2020 (UNWTO, 2020e), which would represent 1 billion fewer international arrivals than in 2019 and a potential loss of US$ 1.1 trillion in international tourism receipts, the largest decline ever. The shock has put 100 million to 120 million direct tourism jobs at risk, with large spill-overs into other sectors."

RESPONSE:

MODEL:

GPT-4 via the ChatGPT Plus web interface

Simulating human subjects ◐

Argyle et al. (2022) suggest using LLMs to emulate human subjects. This idea stems from the realization that LLMs' training data encompass a vast array of human-related information. They successfully condition GPT-3 with the socio-demographic profiles of real people and find that the model's responses to survey questions closely match the actual responses of people with those backgrounds, revealing complex and detailed insights. Horton (2023) extends this concept to economics, employing simulated subjects to replicate and expand upon various behavioral studies. A representative example in prompt #1, reproduced based on Korinek (2023), highlights this approach. While there is a risk of reinforcing biases and inaccurate stereotypes with such simulations, they also hold notable potential: applied judiciously, they can yield meaningful insights about society.
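The conditioning step amounts to prepending a socio-demographic profile before the survey question. A hypothetical sketch (the wording is illustrative, not taken from Argyle et al., 2022):

```python
# Hypothetical sketch of persona conditioning: a socio-demographic
# profile is prepended before the survey question
def persona_prompt(profile, question):
    return (f"Consider a registered voter: {profile}.\n"
            f"Answering as this person would, {question}")

print(persona_prompt(
    "a 32-year old female lawyer in Houston, TX who is Hispanic",
    "do you approve or disapprove of loosening US immigration "
    "restrictions on highly-educated workers? Answer in one word."))
```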

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

For each of the following registered voters, provide a one-word rating to indicate whether they are more likely to approve or disapprove of a policy proposal to loosen US immigration restrictions on highly-educated workers: 

1. A 32-year old female lawyer in Houston, TX who is Hispanic 

2. A 73-year old male retired doctor in Rome, GA who is Caucasian

3. A 25-year old software engineer in San Francisco, CA who is Asian and identifies as non-binary 

4. A 45-year old female grocery check-out clerk in Brooklyn, NY who is second-generation Italian 

5. A 55-year old male insurance agent in Lake City, FL who is Caucasian

RESPONSE:

1. Approve

2. Disapprove

3. Approve

4. Disapprove

5. Disapprove

MODEL:

gpt-4-0613 via OpenAI API (temperature=0)

Math derivations

Setting up models ◐

The usefulness of LLMs in setting up economic models is exemplified in prompt #1 from Korinek (2023). GPT-4 is used to generate LaTeX code, which can be directly inserted into an editor, demonstrating its ability to quickly create common model setups like those in undergraduate problem sets. It also finds the correct first-order conditions.
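For reference, a canonical version of this setup in standard notation (my own sketch of the textbook problem, not GPT-4's verbatim output):

```latex
\[
\max_{x_1, x_2} \; U(x_1, x_2)
  = \left( \alpha x_1^{\rho} + (1-\alpha)\, x_2^{\rho} \right)^{1/\rho}
\quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 \leq m
\]
\[
\mathcal{L}
  = \left( \alpha x_1^{\rho} + (1-\alpha)\, x_2^{\rho} \right)^{1/\rho}
  + \lambda \left( m - p_1 x_1 - p_2 x_2 \right)
\]
\[
\frac{\partial \mathcal{L}}{\partial x_i}
  = \alpha_i\, x_i^{\rho-1} U^{1-\rho} - \lambda p_i = 0,
\qquad \alpha_1 = \alpha, \; \alpha_2 = 1-\alpha
\]
```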

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1:

Set up the optimization problem of a consumer with CES preferences over two goods in LaTeX code.

RESPONSE:

Certainly! Below is the LaTeX code for setting up an optimization problem for a consumer with Constant Elasticity of Substitution (CES) preferences over two goods, say good 1 and good 2. The consumer's utility function is given by the CES form, and the consumer aims to maximize utility subject to a budget constraint. [Shown below as LaTeX output. Note that output in Korinek (2023) only includes formulas (3) and (4).]

This LaTeX document sets up the CES utility function, the budget constraint, the optimization problem, and the Lagrangian for the consumer's problem. It also includes the first-order conditions that need to be solved to find the optimal consumption bundle. To compile this LaTeX code, you will need a LaTeX distribution such as TeX Live, MiKTeX, or MacTeX, and an editor to write and compile the document.

MODEL:

gpt-4-1106-preview via OpenAI API (temperature=0)

Deriving equations (different in Korinek, 2023)

At the time of this writing, LLMs have some capacity to derive mathematical equations. As shown in a continuation of the previous example (again from Korinek, 2023), the LLM correctly derives the relationship between $x_1$ and $x_2$ (prompt #1a) and then the demand functions (prompt #1b).

The prompt was tested using the OpenAI web interface in January 2024. Unless this is down to luck, or to using the web interface instead of the API, GPT-4's abstract mathematical capabilities may have improved since September 2023, when corresponding prompts produced errors in the derivations (Korinek, 2023).
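For reference, the key steps of the textbook derivation in standard notation (my own sketch, not the model's output): dividing the two first-order conditions eliminates $\lambda$, and substituting the resulting ratio into the budget constraint yields the demands.

```latex
\[
\frac{\alpha\, x_1^{\rho-1}}{(1-\alpha)\, x_2^{\rho-1}} = \frac{p_1}{p_2}
\quad \Longrightarrow \quad
x_1 = x_2 \left( \frac{\alpha\, p_2}{(1-\alpha)\, p_1} \right)^{\frac{1}{1-\rho}}
\]
\[
x_2^* = \frac{m}{p_2 + p_1 \left( \dfrac{\alpha\, p_2}{(1-\alpha)\, p_1} \right)^{\frac{1}{1-\rho}}},
\qquad
x_1^* = x_2^* \left( \frac{\alpha\, p_2}{(1-\alpha)\, p_1} \right)^{\frac{1}{1-\rho}}
\]
```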

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1a:

Solve the equations by deriving the demand functions, only providing code I can copy-paste to a LaTeX editor: [Code produced in the response under 'Setting up models']

RESPONSE:

To solve the optimization problem and derive the demand functions for goods $x_1$ and $x_2$, you'll need to solve the system of first-order conditions derived from the Lagrangian. Here's the LaTeX code for the process: [Shown below as LaTeX output. Note that, as opposed to output in Korinek (2023), this is correct, although it does not provide the final solution; see prompt #1b for another iteration.]

This code includes the mathematical steps to solve the system of equations for the consumer's optimization problem. It leads to the relationship between $x_1$ and $x_2$, which can then be substituted into the budget constraint to find the demand functions. The LaTeX code is ready to be copied and pasted into a LaTeX editor for further manipulation or use.

MODEL:

GPT-4 via the ChatGPT Plus web interface

PROMPT #1b (continued):

Substitute this relation into the budget constraint to solve for $x_1^*$ and $x_2^*$, again only showing the LaTeX code.

RESPONSE:

Here is the LaTeX code that substitutes the relationship between $x_1$ and $x_2$ into the budget constraint to solve for $x_1^*$ and $x_2^*$: [Shown below as LaTeX output; correct as opposed to Korinek, 2023.]

This code will calculate the optimal values for $x_1^*$ and $x_2^*$ based on the given budget constraint and the derived relationship between $x_1$ and $x_2$. You can paste this code into your LaTeX editor to compile and view the results.

MODEL:

GPT-4 via the ChatGPT Plus web interface

Math derivations

Explaining models ◐

LLMs also possess the ability to explain simple models, which can be both helpful and risky for students. An example illustrated in prompt #1 from Korinek (2023) involves asking the LLM to explain a model and its solution from pasted LaTeX code. The LLM accurately identifies the model and explains the derivations correctly, as opposed to the longer response documented by Korinek (produced via the OpenAI API instead of the web interface in September 2023). There are still risks of relying on LLMs as tutors to explain models, especially lesser-known ones.

• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 

PROMPT #1: