Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention (paper review)
2024. 4. 16.

Today's paper introduces Infini-attention, a new attention mechanism for handling long contexts efficiently.

https://arxiv.org/abs/2404.07143

Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
"This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach .."
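As a preview of what "bounded memory" means here: the paper combines standard dot-product attention over the current segment with a fixed-size compressive memory that accumulates past segments via linear attention, blended by a learned gate. Below is a minimal single-head NumPy sketch of that idea, based on the update and retrieval equations in the paper (the simple linear update, not the delta-rule variant); the function names, shapes, and the epsilon-initialized normalization term are my own illustration, not the authors' code.

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the nonlinearity used for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_attention_segment(Q, K, V, M, z, beta):
    """One segment of Infini-attention for a single head (illustrative sketch).

    Q, K: (n, d_k) and V: (n, d_v) for the current segment.
    M: (d_k, d_v) compressive memory; z: (d_k,) normalization term.
    beta: scalar gate mixing long-term memory vs. local attention.
    """
    d_k = Q.shape[-1]

    # Local context: standard causal dot-product attention within the segment
    scores = Q @ K.T / np.sqrt(d_k)
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    A_dot = weights @ V

    # Long-term context: retrieve from the memory of all previous segments
    sQ = elu_plus_one(Q)
    A_mem = (sQ @ M) / (sQ @ z)[:, None]

    # Update the memory with the current segment's keys and values
    sK = elu_plus_one(K)
    M_new = M + sK.T @ V
    z_new = z + sK.sum(axis=0)

    # Learned sigmoid gate blends long-term retrieval with local attention
    g = 1.0 / (1.0 + np.exp(-beta))
    A = g * A_mem + (1.0 - g) * A_dot
    return A, M_new, z_new

# Usage: memory starts empty; z gets a small epsilon to avoid 0/0 on segment 1
d_k, d_v, n = 8, 8, 4
M, z = np.zeros((d_k, d_v)), np.full(d_k, 1e-6)
for _ in range(3):  # stream three segments; memory stays fixed-size
    Q, K, V = (np.random.randn(n, d_k) for _ in range(3))
    A, M, z = infini_attention_segment(Q, K, V, M, z, beta=0.0)
```

The point of the design is that M and z have constant size regardless of how many segments have been processed, which is what lets the context grow without the KV cache growing with it.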