Course3 assignment3 - human task - taskObject

The task UI does not show the text to be reviewed. It displays an error, and the literal string 'taskObject'.

from the template:

{{ task.input.taskObject }}

Further review reveals that 'task' is not defined. Where should that be defined?

The start-human-loop code looks correct:
Complete the dictionary input_content, which should contain the original prediction ('initialValue' key) and review text ('taskObject' key).

import json
import time

human_loops_started = []

CONFIDENCE_SCORE_THRESHOLD = 0.90

for review in reviews:
    inputs = [
        {"features": [review]},
    ]

    response = predictor.predict(inputs)
    print(response)
    prediction = response[0]['predicted_label']
    confidence_score = response[0]['probability']

    print('Checking prediction confidence {} for sample review: "{}"'.format(confidence_score, review))

    # condition for when you want to engage a human for review
    if confidence_score < CONFIDENCE_SCORE_THRESHOLD:
        human_loop_name = str(time.time()).replace('.', '-') # timestamp-based unique name
        input_content = {
            ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
            "initialValue": prediction, # Replace None
            "taskObject": reviews # Replace None
            ### END SOLUTION - DO NOT delete this comment for grading purposes
        }
        start_loop_response = a2i.start_human_loop(
            HumanLoopName=human_loop_name,
            FlowDefinitionArn=augmented_ai_flow_definition_arn,
            HumanLoopInput={"InputContent": json.dumps(input_content)},
        )

        human_loops_started.append(human_loop_name)

        print(
            f"Confidence score of {confidence_score * 100}% for prediction of {prediction} is less than the threshold of {CONFIDENCE_SCORE_THRESHOLD * 100}%"
        )
        print(f"*** ==> Starting human loop with name: {human_loop_name}  \n")
    else:
        print(
            f"Confidence score of {confidence_score * 100}% for star rating of {prediction} is above threshold of {CONFIDENCE_SCORE_THRESHOLD * 100}%"
        )
        print("Human loop not needed. \n")
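For reference, the loop-name and InputContent pieces can be exercised locally without calling the A2I API. This is a minimal sketch, with hypothetical sample values standing in for one iteration of the loop above:

```python
import json
import time

# Hypothetical sample values standing in for one loop iteration.
prediction = -1
review = "I am unhappy with this product"

# time.time() returns seconds as a float; replacing the dot yields a
# name like "1652795924-7067013", which is a valid human loop name.
human_loop_name = str(time.time()).replace('.', '-')

# InputContent must be a JSON *string*, hence json.dumps on the dict.
input_content = {"initialValue": prediction, "taskObject": review}
input_content_json = json.dumps(input_content)

print(human_loop_name)
print(input_content_json)
```

The keys here must match what the worker template reads: {{ task.input.taskObject }} looks up "taskObject" inside the deserialized InputContent.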

Welcome to the community @tbucci1 :grinning:
I think the error is caused by the values you entered for "initialValue" and "taskObject". You should enter two strings.

Yes, I did this before just in case the variables were incorrectly assigned and the same thing happened.

I think the issue is where `task` is created. If you are unable to point out where and how it should be created, I will take another look. Thanks.

Ran again today with the same code, and the text 'taskObject' no longer appears in the human task UI - now nothing at all appears. Something has changed.

I checked the template against the other sample templates in the git repo; it looks the same. I guess I can try a different browser, and/or hope that this gets some more attention?

The output of the cell completing the input dictionary looks fine; there doesn't seem to be anything wrong with that code:
[{'probability': 0.9376369118690491, 'predicted_label': 1}]
Checking prediction confidence 0.9376369118690491 for sample review: "I enjoy this product"
Confidence score of 93.76369118690491% for star rating of 1 is above threshold of 90.0%
Human loop not needed.

[{'probability': 0.6340296864509583, 'predicted_label': -1}]
Checking prediction confidence 0.6340296864509583 for sample review: "I am unhappy with this product"
Confidence score of 63.402968645095825% for prediction of -1 is less than the threshold of 90.0%
*** ==> Starting human loop with name: 1652795924-7067013

[{'probability': 0.5422114729881287, 'predicted_label': 1}]
Checking prediction confidence 0.5422114729881287 for sample review: "It is okay"
Confidence score of 54.221147298812866% for prediction of 1 is less than the threshold of 90.0%
*** ==> Starting human loop with name: 1652795925-2174025

[{'probability': 0.3931102454662323, 'predicted_label': 1}]
Checking prediction confidence 0.3931102454662323 for sample review: "sometimes it works"
Confidence score of 39.31102454662323% for prediction of 1 is less than the threshold of 90.0%
*** ==> Starting human loop with name: 1652795925-813733
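The routing decision itself is just a threshold comparison. A minimal sketch, using the confidence values from the log above:

```python
CONFIDENCE_SCORE_THRESHOLD = 0.90

def needs_human_review(confidence_score, threshold=CONFIDENCE_SCORE_THRESHOLD):
    """A human loop is started only when the model is below the threshold."""
    return confidence_score < threshold

# Confidences taken from the log above:
print(needs_human_review(0.9376369118690491))  # high confidence -> False
print(needs_human_review(0.6340296864509583))  # low confidence -> True
print(needs_human_review(0.3931102454662323))  # low confidence -> True
```

So the decision logic is behaving as expected; only the three low-confidence reviews should produce human loops.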

Trying again today with the same code yields yet another result:

Today it lists the reviews - all of them, notably even the one with high confidence ("Human loop not needed.").

It is a different result in the same section each time. How could that be? Can someone please look at this? I appreciate any ideas about what could be happening, or what else I could review to troubleshoot.

This is from the browser; what should the script src be?

window.onload = function() {
  WorkerAppData.initializeWorkerAppData({
    userName: 'user-1652884040',
    appClientId: '1rkt5cld87qe8ovg8vbv9remes',
    cognitoUserPoolDomain: 'groundtruth-user-pool-domain-1652884040.auth.us-east-1.amazoncognito.com',
    metricsEndpoint: 'https://v1ip4qhb9j.execute-api.us-east-1.amazonaws.com/Prod/events',
    stage: 'prod',
    region: 'us-east-1',
    logoutEndpoint: 'https://groundtruth-user-pool-domain-1652884040.auth.us-east-1.amazoncognito.com/logout?client_id=1rkt5cld87qe8ovg8vbv9remes&logout_uri=https://4pammbjrcr.labeling.us-east-1.sagemaker.aws/logout',
    workerPortalTaskIframeUrl: 'https://mturk-console-template-preview-hooks.s3.amazonaws.com/previewUITemplateIFrameContent.html',
    beaconUIMode: 'BeaconUIDisabled',
  });
}

When I go into SageMaker I notice a pop-up message about clearing the workspace, but it disappears before any selection can be made.

Clearly something is not working correctly.

I had the same experience today - all 4 reviews. I was able to select a label and submit. Doing this 3 times completes the task. From there I am able to complete the remainder of the notebook and submit the assignment with passing grade. Thank you.

Here's the problem:

The "taskObject" should be just one review, not `reviews`, which is a list. Here is how the loop starts:

for review in reviews:

so the "taskObject" should be `review`, not `reviews`.

This was almost a year ago and I found a workaround at that time - if you think you found the root cause, that's great - I suggest you mention it to the instructor so they can change the code.