Tutorial: run an evaluation with the Python SDK

This page shows you how to run a model-based evaluation with the Gen AI evaluation service using the Vertex AI SDK for Python.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.

    In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

    Go to project selector

    Verify that billing is enabled for your Google Cloud project.


  2. Install the Vertex AI SDK for Python with the Gen AI evaluation service dependency:

    !pip install google-cloud-aiplatform[evaluation]
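
    In some shells the square brackets in the package spec are parsed as glob characters, so quoting the argument is an equivalent, safer form of the same command:

    !pip install "google-cloud-aiplatform[evaluation]"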
    
  3. Set up your credentials. If you're running this quickstart in Colaboratory, run the following:

    from google.colab import auth
    auth.authenticate_user()
    

    For other environments, see Authentication on Vertex AI.
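
    For example, if you're running locally, one common option (one of several methods covered by that article, noted here as a convenience) is to use Application Default Credentials through the gcloud CLI:

    gcloud auth application-default login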

    Import libraries

    Import your libraries and set up your project and location.

    import pandas as pd
    
    import vertexai
    from vertexai.evaluation import EvalTask, PointwiseMetric, PointwiseMetricPromptTemplate
    from google.cloud import aiplatform
    
    PROJECT_ID = "PROJECT_ID"
    LOCATION = "LOCATION"
    EXPERIMENT_NAME = "EXPERIMENT_NAME"
    
    vertexai.init(
        project=PROJECT_ID,
        location=LOCATION,
    )

    Note that EXPERIMENT_NAME can contain only lowercase alphanumeric characters and hyphens, up to a maximum of 127 characters.
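
    As a quick guard against invalid names, here is a minimal sketch of that rule (a hypothetical helper, not part of the SDK):

    import re

    def is_valid_experiment_name(name: str) -> bool:
        # Lowercase alphanumeric characters and hyphens only,
        # at most 127 characters, per the constraint above.
        return bool(re.fullmatch(r"[a-z0-9-]{1,127}", name))

    assert is_valid_experiment_name("custom-text-quality-eval")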

    Set up evaluation metrics based on your criteria

    The following metric definition evaluates the quality of text generated by a language model (long-form content) based on two criteria: Fluency and Entertaining. The code defines a metric named custom_text_quality based on those two criteria:

    custom_text_quality = PointwiseMetric(
        metric="custom_text_quality",
        metric_prompt_template=PointwiseMetricPromptTemplate(
            criteria={
                "fluency": (
                    "Sentences flow smoothly and are easy to read, avoiding awkward"
                    " phrasing or run-on sentences. Ideas and sentences connect"
                    " logically, using transitions effectively where needed."
                ),
                "entertaining": (
                    "Short, amusing text that incorporates emojis, exclamations and"
                    " questions to convey quick and spontaneous communication and"
                    " diversion."
                ),
            },
            rating_rubric={
                "1": "The response performs well on both criteria.",
                "0": "The response is somewhat aligned with both criteria",
                "-1": "The response falls short on both criteria",
            },
        ),
    )
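
    If you prefer full control over the judge prompt, a metric can also be sketched with a free-form template string instead of the structured builder (a minimal sketch, assuming the SDK accepts a plain string with a {response} placeholder, which the structured variant above compiles down to):

    free_form_text_quality = PointwiseMetric(
        metric="free_form_text_quality",
        metric_prompt_template=(
            # Hypothetical free-form template; {response} is filled in
            # from the dataset's "response" column at evaluation time.
            "Evaluate whether the response is fluent and entertaining.\n"
            "Rate it 1 (good), 0 (medium), or -1 (poor), and explain why.\n\n"
            "Response: {response}"
        ),
    )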
    

    Prepare your dataset

    Add the following code to prepare the dataset:

    responses = [
        # An example of good custom_text_quality
        "Life is a rollercoaster, full of ups and downs, but it's the thrill that keeps us coming back for more!",
        # An example of medium custom_text_quality
        "The weather is nice today, not too hot, not too cold.",
        # An example of poor custom_text_quality
        "The weather is, you know, whatever.",
    ]
    
    eval_dataset = pd.DataFrame({
        "response" : responses,
    })
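
    The metric above references only {response}, so a single response column is enough. If your metric prompt template also references {prompt}, add a matching prompt column (a hypothetical variant of the same dataset):

    prompts = [
        "Write a short, punchy line about life.",
        "Describe today's weather.",
        "Describe today's weather.",
    ]

    eval_dataset = pd.DataFrame({
        "prompt": prompts,
        "response": responses,
    })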
    

    Run the evaluation with your dataset

    Run the evaluation:

    eval_task = EvalTask(
        dataset=eval_dataset,
        metrics=[custom_text_quality],
        experiment=EXPERIMENT_NAME
    )
    
    pointwise_result = eval_task.evaluate()
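
    Each call to evaluate() logs a new run under the experiment. If you want a predictable run name instead of an auto-generated one, it can be passed explicitly (a sketch, assuming evaluate() accepts an experiment_run_name argument; run names follow the same lowercase-and-hyphens rule as experiment names):

    pointwise_result = eval_task.evaluate(
        experiment_run_name="custom-text-quality-run-1",
    )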
    
    

    View the evaluation results for each response in the metrics_table Pandas DataFrame:

    pointwise_result.metrics_table
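
    The result object also exposes aggregate scores across the whole dataset, such as the mean and standard deviation per metric:

    pointwise_result.summary_metrics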
    

    Clean up

    To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

    Delete the ExperimentRun created by the evaluation:

    aiplatform.ExperimentRun(
        run_name=pointwise_result.metadata["experiment_run"],
        experiment=pointwise_result.metadata["experiment"],
    ).delete()
    
    

    What's next