Tips for Using the Pain-Track Custom GPT
The tool’s output is organized into the typical sections of a scientific document (introduction, methods, results, discussion) through a series of consecutive interactions, each of which is either confirmed or refined by the user. Once you enter the pain source and the species (including humans) to be described and quantified, you can proceed by simply answering ‘yes’ or ‘proceed’. However, in real research situations, users can interact with the tool in various ways to derive greater value from it. Some suggestions include:
Providing Detailed Scenarios at the Start: If you have a specific scenario to analyze, provide details at the start. For example, include relevant details about the target animals (which may influence the intensity or temporal evolution of the ‘pain’ experience, such as age, sex, living conditions, comorbidities, and breed) and the interval of analysis you are interested in (e.g., from the moment of injury until the pain is gone).
Approving External Functions: At some point, you will be asked to approve the running of external functions. It is important to approve these requests, as some operations are performed using external Python code, which enhances the tool’s functionality and accuracy.
Uploading Materials: Enhance analyses by uploading scientific papers, reports, and charts. When doing this, it is worth using the prompt recommended here.
Taking Sleeping Time into Account: If you would like to discount pain during sleeping hours, instruct the GPT tool to do so in the initial instructions, for example by stating: “Consider the day has only 16 hours”.
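To make the effect of this instruction concrete, here is a hypothetical sketch of one way such a discount could work. All numbers and variable names below are illustrative assumptions, not output from the tool itself:

```python
# Hypothetical illustration of discounting sleeping hours from cumulative pain time.
# Assume the tool estimates an animal spends 3 hours per day in pain over a
# 10-day recovery period (made-up figures for illustration only).
hours_per_day_in_pain = 3
days = 10

# Default: daily estimates are spread over a full 24-hour day.
full_day_total = hours_per_day_in_pain * days  # 30 hours

# Instructing "Consider the day has only 16 hours" rescales daily estimates
# to waking hours only (16/24 of the original).
waking_fraction = 16 / 24
waking_total = full_day_total * waking_fraction  # 20 hours

print(f"24-hour day: {full_day_total} h; 16-hour day: {waking_total:.1f} h")
```

The point is simply that the 16-hour instruction proportionally reduces cumulative pain-time totals by excluding sleep.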
Interactive Engagement: At any stage, interact with the tool by asking for more details, challenging the provided information, contributing knowledge, or discussing points of special interest.
Keep Conversations Focused: Treat each conversation as a valuable resource. Keep discussions on-topic to maintain the quality of interactions.
Allowing for AI Variability: Large Language Models are not deterministic, so estimates may vary with each query. This unpredictability offers diverse perspectives, similar to consulting different experts. If you are using the estimates produced by the Pain-Track GPT tool to inform decision-making, we suggest using the average of about 10 queries for the same source of ‘pain’, similar to how human estimates typically reflect the consensus of various researchers.
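The averaging procedure above can be sketched in a few lines. The estimates below are fabricated purely for illustration; in practice they would be the durations (or other quantities) returned by about 10 independent queries for the same pain source:

```python
import statistics

# Hypothetical estimates (e.g., hours of pain at a given intensity) returned
# by 10 independent queries for the same source of 'pain'.
estimates = [12.0, 15.5, 11.0, 14.0, 13.5, 16.0, 12.5, 13.0, 15.0, 14.5]

mean_estimate = statistics.mean(estimates)   # central value to report
spread = statistics.stdev(estimates)         # variability across queries

print(f"mean = {mean_estimate:.2f} h, stdev = {spread:.2f} h")
```

Reporting the spread alongside the mean is a simple way to convey how much the model's answers vary between runs.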
Human Expertise is Pivotal: AI-generated analyses are, for now, only starting points; human expertise and critical evaluation are essential to refine and build upon these initial drafts.
Policy Compliance: Avoid consulting the tool about topics related to torture, extreme suffering, etc., as these are against OpenAI’s policies. If you need to address such topics for scientific purposes, to understand and mitigate hotspots of suffering, explain this in the chatbox at the start of the interaction to reduce the chance of a negative response from the system.
Dealing with Technical Issues: If you encounter issues such as improperly formed tables, frozen stages (particularly when justifications for intensity estimates are being generated), or error codes, use the ‘Regenerate’ and ‘Bad response’ icons. This feedback helps improve the GPT model. Persistent issues can be reported to us at AI@welfarefootprint.org.
Sharing the Tool: If you think this tool can be useful for others, feel free to share this document, rather than sharing the GPT link. This ensures users have the necessary background and access to the newest information, as we will continue to update this document with important developments, including potential transitions to other LLMs or the availability of new tools.