AI Model

AI Model with ChatGPT✨

You can select the GPT model best suited to answering your questions, based on your specific needs and preferences.

A few differences between the GPT-3.5 and GPT-4 models:

GPT-3.5-turbo-1106

This model is designed to deliver rapid and cost-effective solutions without compromising on quality. Ideal for high-demand applications where every millisecond counts.

GPT-4-1106-preview

This advanced preview version showcases significant technological advancements and algorithmic refinements, setting a new standard for AI performance and precision.

GPT-4o

GPT-4o is the pinnacle of AI evolution. Building on the robust foundation of GPT-4, this updated version brings unparalleled accuracy, enhanced performance, and sophisticated contextual understanding.

Balance option:

You can specify the creativity level of the model. If you want a simple text search, move the slider toward “Classical text search.” For a more complex search that takes word meanings and word forms into account, choose “Search by context.” If you are unsure, set the slider to “Optimal.”

Example question:

  • “How many people live on Mars now?”

Example answer:

  • “At present, humans do not reside on Mars. However, numerous space agencies and private companies are actively working on missions and projects that could lead to future human habitation. Advanced robotic missions have been deployed to assess the planet’s resources, climate, and potential landing sites. Innovative ideas, such as 3D-printed habitats, underground colonies, and terraforming technologies, are being explored to make Mars more habitable. The dream of living on Mars is fueling creative solutions and pushing the boundaries of human exploration.”

Neutral +1 is the optimal state.

For a search experience that mirrors natural language processing, considering synonyms, context, and related concepts, adjust the slider closer to “Semantic vector search.” 💡


For a more straightforward search relying solely on the words in the provided data, adjust the slider closer to “Classic text search.”🤖
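
Conceptually, the slider sets a weight between literal keyword matching and semantic (vector) matching. The sketch below only illustrates that idea; the scoring functions and the balance parameter are assumptions made for this example, not the product’s actual retrieval code.

    import math

    def keyword_score(query: str, text: str) -> float:
        # Classic text search: share of query words that literally appear in the text.
        q, t = set(query.lower().split()), set(text.lower().split())
        return len(q & t) / len(q) if q else 0.0

    def cosine(a: list[float], b: list[float]) -> float:
        # Semantic search: cosine similarity between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def blended_score(query, text, query_vec, text_vec, balance: float) -> float:
        # balance = 0.0 -> pure classic text search, 1.0 -> pure semantic vector search.
        return (1.0 - balance) * keyword_score(query, text) + balance * cosine(query_vec, text_vec)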


Personality:

These settings make up the standard “prompt”:

Tone: the model responds to input with a tone that can be friendly, professional, humorous, or serious, depending on the context and the user’s mood.

Role: the model can take on different roles in the conversation, such as an expert, assistant, friend, or consultant, depending on the task or situation.

Formatting: the model follows stylistic and text-formatting conventions, maintaining consistency and an appropriate style in its responses.
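
Behind the scenes, Tone, Role, and Formatting typically become instructions in the system message sent to the model. The snippet below only illustrates that idea; the field names and wording are assumptions, not the product’s actual template.

    def build_personality_prompt(tone: str, role: str, formatting: str) -> str:
        # Combine the Personality settings into one system instruction.
        return (
            f"You are acting as a {role}. "
            f"Respond in a {tone} tone. "
            f"Formatting requirements: {formatting}"
        )

    system_prompt = build_personality_prompt(
        tone="friendly",
        role="technical support assistant",
        formatting="short paragraphs, bullet points where helpful",
    )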


Advanced settings:

For additional settings, click the “Advanced Settings” button. Here you can adjust parameters such as Custom Prompt, Precontext, Diversity Range, and Degree of Originality.

Degree of Originality

Adjust the slider to the desired position; the default position is optimal.

Diversity Range

Adjust the slider to the desired position; the default position is optimal.
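
These two sliders correspond to the sampling controls that GPT models expose; most likely, Degree of Originality maps to temperature and Diversity Range to top_p. That mapping is an assumption based on the standard OpenAI parameters, not something stated in the interface. A sketch of how such values are passed with the OpenAI Python client:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Suggest a name for a Mars rover."}],
        temperature=0.7,  # "Degree of Originality": higher values give more creative answers
        top_p=0.9,        # "Diversity Range": share of probability mass the model samples from
    )
    print(response.choices[0].message.content)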


Number of recent messages to keep in context
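
This setting controls how much of the recent conversation is resent to the model with each new question. A minimal sketch of the idea; the trimming logic is an illustration, not the product’s implementation.

    def trim_history(messages: list[dict], keep_last: int) -> list[dict]:
        # Keep only the `keep_last` most recent messages as conversational context.
        return messages[-keep_last:] if keep_last > 0 else []

    history = [
        {"role": "user", "content": "What models are available?"},
        {"role": "assistant", "content": "GPT-3.5-turbo-1106, GPT-4-1106-preview and GPT-4o."},
        {"role": "user", "content": "Which one is fastest?"},
    ]
    context = trim_history(history, keep_last=2)  # only the two latest messages are resent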


Custom Prompt:

Instead of relying on the standard prompt, you can write your own, more specific prompts that better suit your particular task or situation. This gives you finer control over the interaction with the model and produces more precise and relevant answers. A Custom Prompt combines free text with special tags that tell the model about the desired context or task.

Examples:

Say ‘Most likely’ before answering. Please take into account the context: {context}

Answer the question: {question}

{currentDateTime}

Always check {currentDateTime} in the background and use this information when providing answers.

When a user asks about upcoming events, check the {currentDateTime}.

Use the {currentDateTime} with the event list to identify the next upcoming event and provide its date and time.

A “Custom prompt” is a personalized prompt that replaces the standard one. When a custom prompt is provided, the Tone, Role, and Formatting settings are disabled so that the custom prompt takes priority. Consequently, if a “custom prompt” is present, it overrides the standard “prompt” and is used exclusively.
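
In practice the tags are placeholders that are replaced with real data before the text is sent to the model. The sketch below shows that substitution with Python’s str.format; only the tag names come from the examples above, while the surrounding code and sample data are illustrative assumptions.

    from datetime import datetime, timezone

    CUSTOM_PROMPT = (
        "Please take into account the context: {context}\n"
        "Answer the question: {question}\n"
        "Current date and time: {currentDateTime}\n"
        "When the user asks about upcoming events, compare the event list "
        "against {currentDateTime} and report the next upcoming event."
    )

    filled = CUSTOM_PROMPT.format(
        context="Event list: product demo on 2024-07-01, webinar on 2024-08-15.",
        question="What is the next upcoming event?",
        currentDateTime=datetime.now(timezone.utc).isoformat(),
    )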


Knowledge Sources:

Knowledge Sources determines the maximum number of documents or chunks of text used to generate the answer. It lets you control the volume of information from which the model extracts context and details when formulating a response. By adjusting this parameter, you can balance depth of analysis against response speed.

The more Knowledge Sources you specify, the more information the model has to analyze and draw on when crafting a response. However, this can also increase query processing time. Conversely, if too few Knowledge Sources are specified, the model may not have enough context to generate a comprehensive and accurate answer.

Fine-tuning Knowledge Sources enables more precise control over which information the model will use to respond to queries, which is especially useful if there’s a need to limit the model’s access to information due to resource constraints or specific project requirements.
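
In retrieval terms, Knowledge Sources acts as a top-k limit on how many chunks are handed to the model. The sketch below ranks chunks with a deliberately simple relevance score and keeps only the top k; the chunking and scoring details are assumptions made for illustration, not the product’s pipeline.

    def select_knowledge_sources(chunks: list[str], query: str, k: int) -> list[str]:
        # Rank text chunks by a simple relevance score and keep the top `k`.
        def score(chunk: str) -> int:
            # Illustrative measure: how many query words occur in the chunk.
            words = chunk.lower().split()
            return sum(word in words for word in query.lower().split())
        return sorted(chunks, key=score, reverse=True)[:k]

    chunks = [
        "Mars missions are planned by several space agencies.",
        "The company was founded in 2015.",
        "3D-printed habitats could support life on Mars.",
    ]
    # With Knowledge Sources = 2, only the two most relevant chunks reach the model.
    context = select_knowledge_sources(chunks, "life on Mars", k=2)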
