# Example of LLM inference using FlashAttention
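FlashAttention speeds up the attention step of LLM inference by computing exact softmax attention in tiles: it streams blocks of keys and values through fast on-chip memory, maintaining a running maximum and running normalizer per query row ("online softmax") so the full score matrix is never materialized. In practice this is a fused GPU kernel (e.g. the `flash-attn` library, or PyTorch's `torch.nn.functional.scaled_dot_product_attention`, which can dispatch to a FlashAttention backend). As a minimal sketch of the core idea only, the pure-Python code below implements the tiled online-softmax recurrence on lists of floats; block size, dimensions, and function names are illustrative assumptions, and this is not the fused kernel used in real inference.

```python
import math

def naive_attention(Q, K, V):
    # Reference: out = softmax(Q K^T / sqrt(d)) V, one query row at a time.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        out.append([sum(e * v[j] for e, v in zip(exps, V)) / z
                    for j in range(len(V[0]))])
    return out

def flash_attention(Q, K, V, block=2):
    # Educational sketch of FlashAttention's tiling: process K/V in blocks,
    # keeping per query row a running max `m`, running softmax denominator `l`,
    # and an unnormalized output accumulator `acc`, rescaling them whenever
    # a new block raises the running max (the online-softmax trick).
    d = len(Q[0])
    n = len(K)
    out = []
    for q in Q:
        m = -math.inf                # running max of scores seen so far
        l = 0.0                      # running softmax denominator
        acc = [0.0] * len(V[0])      # running (unnormalized) output
        for start in range(0, n, block):
            ks = K[start:start + block]
            vs = V[start:start + block]
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in ks]
            m_new = max(m, max(scores))
            scale = math.exp(m - m_new)      # rescale earlier partial sums
            l *= scale
            acc = [a * scale for a in acc]
            for s, v in zip(scores, vs):
                p = math.exp(s - m_new)
                l += p
                acc = [a + p * vj for a, vj in zip(acc, v)]
            m = m_new
        out.append([a / l for a in acc])     # normalize once at the end
    return out
```

Because the rescaling keeps every partial sum consistent with the current running max, the tiled result matches the naive computation exactly (up to floating-point rounding), which is the property that lets FlashAttention be both exact and memory-efficient.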