11 posts

How ADRs solve the last mile problem of application security

A few companies are starting to pave the way for Application Detection and Response (ADR), including Oligo Security, RevealSecurity and Miggo Security. You may quickly find yourself lost trying to understand what these solutions aim to tackle. First, each of these solutions likely focuses on what it does best. For example, Oligo

LLM-based Agents

Large Language Models (LLMs) have evolved from being merely passive text generators with limited capabilities to becoming autonomous or semi-autonomous agents navigating complex environments and offering actionable insights. This transformation equips them with a diverse set of tools, perception modules to interpret signals from various modalities, and memory systems to

Recent Advances in Multimodal LLMs (MLLMs)

Multimodal Large Language Models (MLLMs) are garnering significant attention. There has been a plethora of work in recent months dedicated to the development of MLLMs [Flamingo, NExT-GPT, Gemini...]. The key challenge for MLLMs lies in effectively injecting multimodal data into LLMs. Most research begins with pre-trained LLMs and employs modality-specific encoders
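The common pattern can be sketched in a few lines: a frozen modality-specific encoder produces features, and a small trainable projection maps them into the LLM's token-embedding space. This is a minimal NumPy sketch; the dimensions and the linear adapter are illustrative assumptions, not any specific model's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a vision encoder emits 512-d patch features,
# while the LLM's token embeddings live in a 1024-d space.
d_vision, d_llm, n_patches = 512, 1024, 16

image_features = rng.normal(size=(n_patches, d_vision))  # from a frozen vision encoder
projection = rng.normal(size=(d_vision, d_llm)) * 0.01   # the trainable "adapter"

# Project visual features into the LLM embedding space and prepend
# them to the text token embeddings (Flamingo/LLaVA-style designs
# differ in detail, but share this interface idea).
visual_tokens = image_features @ projection
text_tokens = rng.normal(size=(8, d_llm))                # embeddings of a text prompt
llm_input = np.concatenate([visual_tokens, text_tokens], axis=0)

print(llm_input.shape)  # (24, 1024): 16 visual + 8 text tokens
```

The LLM itself stays unchanged; only the projection (and sometimes the encoder) is trained, which is what makes this injection strategy cheap.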

Augmenting LLMs Lenses

Large Language Models (LLMs) are developed to approximate the probability distribution that governs the space of human language. Autoregressive models approximate this distribution by predicting subsequent words based on previous context, forming a Markov chain. World knowledge (often referred to as parametric knowledge) is stored implicitly within the model's parameters.
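The autoregressive idea can be illustrated with the simplest possible case: a toy bigram model, where the next word depends only on the current one. Real LLMs condition on the whole context, but the prediction loop is the same shape. The vocabulary and probabilities below are made-up numbers for illustration.

```python
import numpy as np

# Toy bigram "language model": P(next word | current word).
vocab = ["the", "cat", "sat", "mat"]
# Rows: current word; columns: next-word probabilities (illustrative values).
P = np.array([
    [0.0, 0.6, 0.1, 0.3],   # after "the"
    [0.1, 0.0, 0.8, 0.1],   # after "cat"
    [0.9, 0.0, 0.0, 0.1],   # after "sat"
    [0.5, 0.2, 0.2, 0.1],   # after "mat"
])

def greedy_continuation(start, steps):
    """Autoregressively pick the most likely next word at each step."""
    out = [start]
    for _ in range(steps):
        i = vocab.index(out[-1])
        out.append(vocab[int(np.argmax(P[i]))])
    return out

print(greedy_continuation("the", 3))  # ['the', 'cat', 'sat', 'the']
```

Each generated word is fed back in as the next step's context, which is exactly the Markov-chain structure the excerpt describes.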

Privacy Concerns and Potential Attacks in LLMs

Large Language Models (LLMs), exemplified by OpenAI’s GPT-4 and Meta’s LLaMA, continue to impress us with their capabilities, which have surpassed expectations from just a few years ago. Recently, the research community has shifted its focus towards the optimal and efficient usage of resources. Concepts like the Mixture

Parameter-Efficient Fine-Tuning (PEFT), LoRA and Quantization

Transformer-based deep learning models, such as GPT-3 and LLaMA, have achieved state-of-the-art results on many NLP tasks. These models have exhibited outstanding performance and are capable of resolving tasks on the fly through in-context learning (ICL) without the need for retraining. This approach helps to avoid the well-known catastrophic forgetting problem.
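When fine-tuning is needed, LoRA keeps the pretrained weight matrix frozen and learns only a low-rank update. A minimal sketch of the forward pass, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # full dimension vs. low rank (r << d)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x, alpha=8.0):
    # Output = frozen path + scaled low-rank update (W + (alpha/r) * B @ A).
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d))
# With B zero-initialized, LoRA starts out as an exact no-op:
assert np.allclose(lora_forward(x), x @ W.T)

# Only 2*d*r parameters are trained instead of d*d.
print(2 * d * r, "trainable vs", d * d, "frozen")  # 512 vs 4096
```

The zero-initialization of B is what lets training start from the pretrained model's behavior, and the small trainable parameter count is where the "parameter-efficient" in PEFT comes from.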

Large Models Training

The urge to train expansive deep learning models, particularly large language models, is ever-growing. A single GPU often falls short in providing the required memory capacity to accommodate various parameters and data, thus necessitating the employment of multiple GPUs. Additionally, the time cost of training complex models can be daunting.
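The most common way to spread that load is data parallelism: each device holds a replica of the weights, computes gradients on its own shard of the batch, and an all-reduce averages the gradients before a shared update. A miniature sketch with a linear model and synthetic data (the model and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data parallelism in miniature: 4 "GPUs", each with its own data shard.
n_workers, d = 4, 8
w = rng.normal(size=d)                        # weights replicated on every worker
shards = [rng.normal(size=(16, d)) for _ in range(n_workers)]
targets = [s @ np.ones(d) for s in shards]    # synthetic regression targets

def local_grad(X, y, w):
    # Gradient of mean squared error for a linear model on one shard.
    return 2 * X.T @ (X @ w - y) / len(y)

grads = [local_grad(X, y, w) for X, y in zip(shards, targets)]
avg_grad = np.mean(grads, axis=0)             # the "all-reduce" step
w -= 0.01 * avg_grad                          # identical update on every replica

print(avg_grad.shape)  # (8,)
```

When the model itself no longer fits on one device, this is complemented by model/pipeline parallelism, which splits the parameters rather than the data.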

Mixture Of Experts (MoE) & LLMs

Scaling up the size of models leads to a considerable augmentation in computational expenses, both during training and inference phases. In a bid to harness the benefits of parameter scaling without an equivalent surge in computational requirements, the Mixture of Experts (MoE) approach was developed for expansive language models. Within
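The core trick is sparse routing: a learned gate sends each token to only its top-k experts, so total parameters grow with the number of experts while per-token compute stays roughly constant. A minimal sketch with illustrative sizes and randomly initialized experts:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 4, 2

# Each expert is a small feed-forward layer; the gate picks which ones run.
experts = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts)) * 0.1

def moe_forward(x):
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over the chosen experts only
    # Only k of n experts are evaluated for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d)
y = moe_forward(x)
print(y.shape)  # (16,)
```

With top_k=2 of 4 experts, half the expert parameters sit idle for any given token, which is how MoE decouples parameter count from per-token FLOPs.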

Evaluation of Large Language Models (LLMs)

Large language models (LLMs) have shown tremendous capabilities, ranging from text summarization and classification to more complex tasks like code generation. However, there is still an urgent need to understand how we can holistically evaluate properly trained models. Traditional benchmarks tend to fall short, as LLMs are capable of handling