Below is a breakdown of ~20 major topics, each of which can have multiple sub-points (so you can think of them as “points to master”). In your learning, translate each into possible exam questions or practical applications.
Definition: what makes AI “generative” (versus discriminative models)
Types of generative models (e.g. autoregressive, diffusion, GANs, variational autoencoders)
Strengths and limitations (e.g. creativity vs hallucination)
Difference between generative and traditional AI (pattern recognition, predictive modeling)
Transformer architecture basics (attention mechanism, layers)
Prompting methods, few-shot, zero-shot, chain-of-thought
Fine-tuning vs prompt engineering vs adapter approaches
Model inference, latency, scaling, cost tradeoffs
Matching business problems to GenAI possibilities
Criteria for selecting high-impact use cases (value, feasibility, risk)
Use case templates and value-impact frameworks
Prioritization techniques (ROI, cost/benefit, risk vs reward)
Designing end-to-end solution flow (input, model, post-processing, feedback loop)
Integrations with existing systems
Data pipelines, preprocessing, data governance
Human-in-the-loop design and fallback strategies
Bias, fairness, transparency, explainability
Accountability and decision traceability
Privacy, data protection, handling sensitive data
Regulatory compliance and legal constraints
Ethical frameworks for deployment
Organizational readiness, stakeholder alignment
Communication plans, training, upskilling
Cultural shifts: human+AI collaboration mindset
Metrics for adoption, feedback loops, continuous improvement
Quantitative KPIs (productivity gains, cost savings, throughput)
Qualitative KPIs (user satisfaction, quality, innovation)
Baseline measurement, uplift measurement, attribution
TCO (total cost of ownership) and payback period
Model drift detection, performance monitoring
Fail-safe mechanisms (fallback, human review)
Logging, auditing, version control
Continuous retraining, feedback loops
Evaluating model providers (OpenAI, Anthropic, open models)
Alignment with cloud environments, infrastructure constraints
Cost models (token pricing, compute, storage)
Partner ecosystems, open-source tradeoffs
Ownership of generated content
Licenses and usage rights
Handling third-party content, copyrighted data
Open vs proprietary models
Prompt structure, context, priming
Conditioning and prompt “locks”
Ensuring consistency, reducing hallucinations
Prompt tuning strategies
Metrics: accuracy, relevance, fluency, diversity
Human evaluation vs automated metrics
Adversarial testing, stress testing
Edge cases, robustness, error handling
Translating business needs into technical specs
Working with data scientists, ML engineers
Agile / iterative delivery in AI projects
Documentation and handover
Deployment strategies (batch, real-time)
Infrastructure (GPU, CPU, memory, caching)
Cost optimization (pruning, quantization, model distillation)
Autoscaling and capacity planning
Versioning, rollback strategies
Experiment tracking, model registry
Retraining triggers, schedule vs event-driven
Decommissioning, sunset of models
Adapting use cases to domain (finance, healthcare, legal, etc.)
Domain-specific constraints (regulation, data sensitivity)
Leveraging domain knowledge (ontologies, knowledge graphs)
Hybrid approaches (combining generative with deterministic systems)
Adversarial attacks, prompt injection
Input sanitization, output filtering
Secure inference, API hardening
Ensuring integrity of model and data
Techniques to explain model decisions
Proxy models, attention heatmaps, rationale generation
Transparency to stakeholders, trust building
How AI changes consulting engagement models
Embedded AI advisory vs traditional models
Accelerators, toolkits, reusable assets
Shifts in skillsets and roles
Agentic AI, autonomous agents
Multimodal models (text + vision + audio)
Foundation model fine-tuning vs retrieval-augmented generation
AI regulation, certification, standards