
Pass the Oracle Cloud Infrastructure 1z0-1127-25 exam with ExamsMirror questions and answers

Practice at least 50% of the questions to maximize your chances of passing.

Question #1:

When does a chain typically interact with memory in a run within the LangChain framework?

Options:

A. Only after the output has been generated
B. Before user input and after chain execution
C. After user input but before chain execution, and again after core logic but before output
D. Continuously throughout the entire chain execution process
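For context on the timing options above, here is a minimal sketch of how LangChain-style memory typically brackets a run: it is read after user input arrives but before the core logic executes, and written back after the core logic but before the output is returned. This assumes the classic langchain package (ConversationBufferMemory is real); run_chain and its stubbed core logic are hypothetical:

```python
# Minimal sketch of memory bracketing a chain run (hypothetical run_chain helper).
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

def run_chain(user_input: str) -> str:
    # 1. After user input, BEFORE core logic: read prior context from memory.
    context = memory.load_memory_variables({})["history"]
    # 2. Core logic (an LLM call in a real chain; stubbed here for illustration).
    output = f"(answer using context: {context!r})"
    # 3. After core logic, BEFORE returning output: write the turn back to memory.
    memory.save_context({"input": user_input}, {"output": output})
    return output
```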

Question #2:

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

Options:

A. Increasing temperature removes the impact of the most likely word.
B. Decreasing temperature broadens the distribution, making less likely words more probable.
C. Increasing temperature flattens the distribution, allowing for more varied word choices.
D. Temperature has no effect on the probability distribution; it only changes the speed of decoding.
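A small NumPy sketch of the standard temperature-scaled softmax, showing how dividing logits by the temperature sharpens or flattens the resulting distribution; the logits are made-up, illustrative values:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0]                       # hypothetical next-token scores
print(softmax_with_temperature(logits, 0.5))   # sharper: mass concentrates on top token
print(softmax_with_temperature(logits, 1.0))   # unscaled distribution
print(softmax_with_temperature(logits, 2.0))   # flatter: more varied choices possible
```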

Question #3:

What is the role of temperature in the decoding process of a Large Language Model (LLM)?

Options:

A. To increase the accuracy of the most likely word in the vocabulary
B. To determine the number of words to generate in a single decoding step
C. To decide to which part of speech the next word should belong
D. To adjust the sharpness of probability distribution over vocabulary when selecting the next word
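For reference, both temperature questions above trade on the same standard formula, the temperature-scaled softmax (the generic textbook form; individual providers may differ in detail):

```latex
p_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}
```

Lower T sharpens the distribution toward the highest-logit token; higher T flattens it toward uniform.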

Question #4:

What does accuracy measure in the context of fine-tuning results for a generative model?

Options:

A. The number of predictions a model makes, regardless of whether they are correct or incorrect
B. The proportion of incorrect predictions made by the model during an evaluation
C. How many predictions the model made correctly out of all the predictions in an evaluation
D. The depth of the neural network layers used in the model
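Accuracy here is simply correct predictions divided by total predictions; a toy check with made-up labels:

```python
# Accuracy = correct predictions / all predictions (toy labels, not real eval data).
predictions = ["yes", "no", "yes", "yes"]
labels      = ["yes", "no", "no",  "yes"]
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.75 -- three of four predictions match their labels
```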

Question #5:

Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?

Options:

A. "Top p" selects tokens from the "Top k" tokens sorted by probability.
B. "Top p" assigns penalties to frequently occurring tokens.
C. "Top p" limits token selection based on the sum of their probabilities.
D. "Top p" determines the maximum number of tokens per response.

Question #6:

How does a presence penalty function in language model generation when using OCI Generative AI service?

Options:

A. It penalizes all tokens equally, regardless of how often they have appeared.
B. It only penalizes tokens that have never appeared in the text before.
C. It applies a penalty only if the token has appeared more than twice.
D. It penalizes a token each time it appears after the first occurrence.
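The sketch below shows a common flat form of a presence penalty, where any token that has already appeared has its logit reduced, regardless of count. Services differ in exactly how, and how often, the penalty is applied, so treat this as illustrative rather than OCI's exact formula:

```python
# Reduce the logit of any token already present in the output (flat formulation).
def apply_presence_penalty(logits, generated_token_ids, penalty=1.0):
    adjusted = dict(logits)
    for token_id in set(generated_token_ids):   # each previously seen token, once
        if token_id in adjusted:
            adjusted[token_id] -= penalty
    return adjusted

logits = {101: 3.0, 102: 2.5, 103: 1.0}
print(apply_presence_penalty(logits, [101, 101, 102]))
# {101: 2.0, 102: 1.5, 103: 1.0} -- repeated 101 is penalized no more than 102
```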

Question #7:

What do embeddings in Large Language Models (LLMs) represent?

Options:

A. The color and size of the font in textual data
B. The frequency of each word or pixel in the data
C. The semantic content of data in high-dimensional vectors
D. The grammatical structure of sentences in the data
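A toy illustration of the vector-semantics idea: embeddings of related texts score high on cosine similarity. The 3-dimensional vectors below are hand-made for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat     = [0.9, 0.1, 0.0]    # hand-made stand-ins for embedding vectors
kitten  = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]
print(cosine_similarity(cat, kitten))   # high: related meanings sit close together
print(cosine_similarity(cat, invoice))  # low: unrelated meanings sit far apart
```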

Question #8:

An AI development company is building an AI-assisted chatbot for a customer, an online retail company. The goal is to create an assistant that can answer queries about the company's policies and retain the chat history throughout a session. Considering these requirements, which type of model would be best?

Options:

A. A keyword search-based AI that responds based on specific keywords identified in customer queries.
B. An LLM enhanced with Retrieval-Augmented Generation (RAG) for dynamic information retrieval and response generation.
C. An LLM dedicated to generating text responses without external data integration.
D. A pre-trained LLM from Cohere or OpenAI.
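A minimal sketch of the RAG-plus-session-memory pattern the question points at: retrieve relevant policy text, generate a grounded answer, and keep the session history. The retriever and llm below are hypothetical stand-in stubs, not a real SDK:

```python
def answer(query, history, retriever, llm):
    passages = retriever(query)                  # dynamic information retrieval
    prompt = (
        "Answer using only the policy excerpts below.\n"
        f"Excerpts: {passages}\n"
        f"Chat history: {history}\n"
        f"Question: {query}"
    )
    reply = llm(prompt)                          # grounded response generation
    history.append((query, reply))               # chat history retained per session
    return reply

history = []
retriever = lambda q: ["Returns are accepted within 30 days with a receipt."]  # stub
llm = lambda prompt: "You can return items within 30 days with a receipt."     # stub
print(answer("What is your return policy?", history, retriever, llm))
```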

Question #9:

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A. PEFT involves only a few or new parameters and uses labeled, task-specific data.
B. PEFT modifies all parameters and is typically used when no training data exists.
C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.
D. PEFT modifies all parameters and uses unlabeled, task-agnostic data.
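A sketch of the PEFT idea in PyTorch (assumed installed): freeze the pretrained weights and train only a small number of new parameters. The toy linear adapter stands in for real methods such as LoRA or adapter layers:

```python
import torch
import torch.nn as nn

base = nn.Linear(768, 768)            # stand-in for a pretrained layer
for p in base.parameters():
    p.requires_grad = False           # classic fine-tuning would leave these trainable

adapter = nn.Linear(768, 768)         # the few new, task-specific parameters

def forward(x):
    return base(x) + adapter(x)       # base output plus the learned adjustment

trainable = sum(p.numel() for p in adapter.parameters())
print(f"trainable parameters: {trainable}")  # only the adapter's weights update
```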
