In all the queries above, estimates were produced using the tool's default values, but in real research situations users can interact with the tool in many ways to obtain greater value from it. Some suggestions include:
Uploading Materials: Enhance analyses by uploading scientific papers, reports, and charts. When doing so, it is worth using the prompt recommended here.
Specific Scenarios: If you have a specific scenario to analyze, provide details at the start. For example, include relevant details about the target animals (which may influence the intensity or temporal evolution of the ‘pain’ experience, such as age, sex, living conditions, comorbidities, and breed) and the interval of analysis you are interested in (e.g., from the moment of injury until the pain is gone).
Sleeping Time: If you would like to discount pain experienced during sleeping hours, instruct the GPT tool to do so in your initial instructions, for example by stating: “Consider the day has only 16 hours”.
Interactive Engagement: At any stage, interact with the tool by asking for more details, challenging the provided information, contributing knowledge, or discussing points of special interest.
Focused Conversations: Treat each conversation as a valuable resource. Keep discussions on-topic to maintain the quality of interactions.
Approving External Functions: At some point, you will be asked to approve the running of external functions. It is important to approve these requests, as some operations are performed using external Python code, which enhances the tool’s functionality and accuracy.
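The “Sleeping Time” instruction above amounts to a simple rescaling of a per-day estimate to waking hours. A minimal sketch of that arithmetic, assuming pain is distributed uniformly across the day (the function name is ours for illustration, not part of the tool):

```python
def discount_sleep(pain_hours_per_day: float, waking_hours: float = 16.0) -> float:
    """Rescale a per-24-hour pain estimate to waking hours only.

    Assumes the 'pain' is spread uniformly across the day, so the
    fraction experienced while awake is waking_hours / 24.
    """
    if not 0 < waking_hours <= 24:
        raise ValueError("waking_hours must be in (0, 24]")
    return pain_hours_per_day * waking_hours / 24.0

# Example: 6 hours of 'pain' per 24-hour day, counting a 16-hour waking day
print(discount_sleep(6.0))  # 4.0
```

In practice the tool performs any such adjustment internally once instructed; the sketch only shows what the “16-hour day” instruction implies numerically.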
Once the tool began to perform satisfactorily, our priority has been to make it available to the research and advocacy community, even if it is not yet perfect. We have found it particularly useful for didactic purposes, incorporating it into our workshops as a way to engage participants in understanding and using the Welfare Footprint method. Participants are motivated by being able to immediately describe and quantify sources of suffering in their target species, or even their personal pain experiences.
Existing limitations of the tool are primarily due to the limited capabilities of current chatbot platforms. Instructions for Custom GPTs are limited to 8,000 characters (although we outsource parts of the analytical process by adding functions and uploading content to the knowledge base), which prevents the tool from taking full advantage of the existing capabilities of LLMs. In all cases, it is important to take the following points into account:
LLM Variability: Large Language Models are not deterministic, so estimates are likely to vary with each query. This variability offers diverse perspectives, similar to consulting different experts. If you are going to use estimates produced by the Pain-Track GPT tool to inform decision-making, we suggest using the average of about 10 queries for the same source of ‘pain’, in the same way that estimates conducted by humans typically reflect the consensus of several researchers (as we did with the example above).
Human Expertise: AI-generated analyses are, so far, only valuable starting points; human expertise and critical evaluation are essential to refine and build upon these initial drafts.
Policy Compliance: Avoid consulting the tool about topics related to torture, extreme suffering, and the like, as these are against OpenAI’s policies. If you need to address such topics for scientific purposes (i.e., to understand and mitigate hotspots of suffering), explain this at the start of the interaction to reduce the chance of a negative response from the system.
Technical Issues: If you encounter issues such as improperly formed tables, frozen stages (particularly while justifications for intensity estimates are being generated), or error codes, use the ‘Regenerate’ and ‘Bad response’ icons. This feedback helps improve the GPT model. Persistent issues can be reported to us at AI@welfarefootprint.org.
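The averaging of repeated queries suggested under “LLM Variability” can be done by hand or with a few lines of code. A minimal sketch, using invented example values (these are not real tool outputs); reporting the standard deviation alongside the mean gives a rough sense of run-to-run variability:

```python
from statistics import mean, stdev

def aggregate_estimates(estimates: list[float]) -> tuple[float, float]:
    """Combine estimates from repeated queries about the same 'pain' source.

    Returns the mean (to inform decision-making) and the sample standard
    deviation (a rough measure of variability across queries).
    """
    if len(estimates) < 2:
        raise ValueError("need at least two estimates")
    return mean(estimates), stdev(estimates)

# Hypothetical hours-of-pain estimates from 10 independent queries
runs = [12.0, 14.5, 11.0, 13.0, 12.5, 15.0, 12.5, 14.0, 11.5, 14.0]
avg, spread = aggregate_estimates(runs)
print(f"mean={avg:.2f}, sd={spread:.2f}")  # mean=13.00, sd=1.33
```

If the spread is large relative to the mean, consider running more queries or tightening the scenario details supplied at the start of the conversation.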
If you think this tool can be useful to others, please share this document rather than the GPT link. This ensures users have the necessary background and access to the latest information, as we will continue to update this document with important developments, including potential transitions to other LLMs or the availability of new tools.