In this RQ, we investigate the generalizability of our imitation attack. As discussed in the paper, we use two APIs here: text-davinci-003 and gpt-3.5-turbo. Note that we use the '0301' version of gpt-3.5-turbo for this study.
Readers can refer to OpenAI's website for more information about the different models: https://platform.openai.com/docs/models/gpt-3-5 (Note that you need to be in a country/region where OpenAI's services are available.)
Our imitation attack method exhibits encouraging generalizability and can adapt to different LLM APIs without extra adaptation. This suggests a potentially more severe threat in practice, as attackers may transfer the attack to other LLM APIs at low cost.
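As a hypothetical illustration of why the transfer is low-cost, the sketch below shows that the same attack query only needs to be wrapped in a different payload shape for each API: text-davinci-003 uses the completions format (a raw prompt string), while gpt-3.5-turbo-0301 uses the chat format (a message list). The helper function `build_request` is our own illustrative name, not part of the study's code; the payload shapes follow OpenAI's pre-1.0 Python SDK conventions.

```python
def build_request(model: str, prompt: str) -> dict:
    """Return an illustrative request payload for the given OpenAI model.

    gpt-3.5-turbo-* models expect a chat-style message list; legacy
    completion models such as text-davinci-003 expect a raw prompt.
    """
    if model.startswith("gpt-3.5-turbo"):
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
    return {"model": model, "prompt": prompt}


# The same attack query is reformatted for each target API with no other
# per-model changes, which is what "low-cost transfer" amounts to here.
for m in ("text-davinci-003", "gpt-3.5-turbo-0301"):
    print(m, "->", sorted(build_request(m, "example attack query").keys()))
```

This is only a sketch of the request-formatting step; the actual attack pipeline (query generation, response collection, imitation-model training) is described in the paper.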