I am referring to the short note that is enclosed in the notebook:
NOTE: With regards to the output is parseable JSON. LLMs don’t always do this successfully.
I found it challenging to figure out why LLMs don’t always do this successfully, and I am glad to share my insight with anyone who might be interested.
In short, to help understand why, I displayed the raw output generated by the LLM for the first task (Analyze Code Quality) using the following code snippet:
# Get the raw output of the first task (Analyze Code Quality)
raw_output_task_0 = result.tasks_output[0].raw
print("--- Raw Output of Analyze Code Quality Task ---")
print(raw_output_task_0)
The output is shown in the following screenshot:
After that, I inspected the output for any deviations from standard JSON format, such as extra text before or after the JSON, incorrect delimiters, or unescaped characters.
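To illustrate the most common deviation, here is a minimal sketch (the sample payload is hypothetical, chosen to match the keys seen later): an LLM often wraps otherwise valid JSON in markdown code fences, which makes a direct `json.loads` call fail.

```python
import json

# Hypothetical raw LLM output: valid JSON wrapped in markdown code fences.
# The fence string is built with "`" * 3 to keep this snippet self-contained.
fence = "`" * 3
raw_output = fence + 'json\n{"critical_issues": [], "minor_issues": [], "reasoning": "ok"}\n' + fence

try:
    json.loads(raw_output)
except json.JSONDecodeError as e:
    # The leading fence is not valid JSON, so parsing fails immediately.
    print(f"Parsing fails before cleanup: {e}")
```

This is exactly the failure mode the cleanup below addresses: the JSON itself is fine, but the surrounding delimiters are not JSON.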
To fix this, I extracted the JSON string by removing the markdown code-block delimiters, running the following code snippet:
import json
# Get the raw output of the first task (Analyze Code Quality)
raw_output_task_0 = result.tasks_output[0].raw
# Extract the JSON string by removing the markdown code block delimiters
json_string = raw_output_task_0.strip().replace('```json', '').replace('```', '')
try:
    # Parse the cleaned string as a JSON dictionary
    parsed_output = json.loads(json_string)
    print("✅ Can be parsed as JSON dictionary")
    print(f"Keys: {list(parsed_output.keys())}")
except json.JSONDecodeError as e:
    print(f"❌ Cannot parse as JSON: {e}")
    print("Raw string content for debugging:")
    print(json_string)
With this change, it is possible to get the expected output:
Expected output:
✅ Can be parsed as JSON dictionary
Keys: ['critical_issues', 'minor_issues', 'reasoning']
The same should be done for the second task (Review Security)…
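Rather than repeating the cleanup inline for each task, the same logic can be wrapped in a small helper. This is a sketch: `parse_task_output` is a name I am introducing for illustration, and it assumes the same `result.tasks_output[i].raw` attribute path used in the snippets above.

```python
import json

def parse_task_output(raw: str) -> dict:
    """Strip markdown code-fence delimiters and parse the result as JSON."""
    fence = "`" * 3  # literal ``` built programmatically
    cleaned = raw.strip().replace(fence + "json", "").replace(fence, "")
    return json.loads(cleaned)

# Usage sketch for the second task (Review Security):
# parsed_security = parse_task_output(result.tasks_output[1].raw)
# print(list(parsed_security.keys()))
```

A helper like this keeps the fence-stripping in one place, so any later refinement (e.g. a stricter extraction) only has to be made once.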
