Nope. Documentation + LLM misses a big chunk of what we call human error, most commonly XY problems. LLMs are good until you need to go deep; then they start giving surface-level answers. It's possible to point them in the right direction by refining the prompt, but at that point I'd rather just read the documentation.