
CLEVER: A Curated Benchmark for Formally Verified Code Generation
Jul 8, 2025 · TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all …
We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; …
Evaluating the Robustness of Neural Networks: An Extreme Value...
Feb 15, 2018 · Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is …
In this paper, we have proposed a novel counterfactual framework CLEVER for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates …
Contrastive Learning Via Equivariant Representation - OpenReview
Sep 26, 2024 · In this paper, we revisit the roles of augmentation strategies and equivariance in improving CL's efficacy. We propose CLeVER (Contrastive Learning Via Equivariant …
STAIR: Improving Safety Alignment with Introspective Reasoning
May 1, 2025 · One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can …
Submissions | OpenReview
Jan 22, 2025 · Leaving the barn door open for Clever Hans: Simple features predict LLM benchmark answers Lorenzo Pacchiardi, Marko Tesic, Lucy G Cheke, Jose Hernandez-Orallo …
Weakly-Supervised Affordance Grounding Guided by Part-Level...
Jan 22, 2025 · In this work, we focus on the task of weakly supervised affordance grounding, where a model is trained to identify affordance regions on objects using human-object …
While, as we mentioned earlier, there can be thorny "Clever Hans" issues about humans prompting LLMs, an automated verifier mechanically backprompting the LLM doesn't suffer from these. …
Dual-Model Defense: Safeguarding Diffusion Models from …
Sep 27, 2024 · Abstract: Diffusion models have demonstrated remarkable capabilities in image synthesis, but their recently proven vulnerability to Membership Inference Attacks (MIAs) …