OpenClaw Codex Harness Launch Kit: Subscription Auth, Runtime Setup, Tool Search, and Migration Checklist
This item includes essential tools and setup for the OpenClaw Codex Harness, covering runtime configuration, tool discovery, and the migration checklist.
Academic Research is an OpenClaw community skill that wraps OpenAlex lookups into agent-friendly research tasks: topic search, author search, DOI lookup, citation-chain exploration, open-access URL discovery, and a lightweight literature-review workflow. The source evidence says it uses OpenAlex without an API key and includes Python scripts for search and review generation.
Treat this as a candidate tool for researchers, students, content teams, and agent builders who need fast paper triage before a deeper manual review. It is especially useful when the job is discovery and metadata synthesis rather than guaranteed full-text extraction or peer-reviewed conclusions.
The visible setup surface is small: a SKILL.md plus Python scripts that call public scholarly APIs. Source files reviewed from ClawHub show network calls to OpenAlex and Unpaywall, a /tmp cache for literature-review results, and optional markdown/JSON output. Pricing is classified as free because the ClawHub/source text states OpenAlex usage needs no API key and the page lists an MIT-0 license; no paid gate was visible in the fetched evidence.
Before anyone installs or uses it, Runner AI Review should produce artifacts for:
- a static scan of SKILL.md and all bundled scripts;
- a dependency/install review, including Python package imports and whether requests is assumed or bundled;
- a prompt-injection and tool-poisoning review of the skill text and generated outputs;
- sandbox execution against harmless OpenAlex queries with network egress restricted to expected domains;
- screenshot or video capture of representative command output;
- a residual-risk note covering API data quality, cached files in /tmp, outbound scholarly API calls, and citation-synthesis hallucination risk.
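The static-scan and egress-restriction checks can begin with a cheap pass over the bundled scripts before any sandbox run. A small sketch (the allowlist contents and URL regex are assumptions for illustration) that flags hard-coded URLs pointing outside the expected scholarly APIs:

```python
import re
from urllib.parse import urlparse

# Expected egress for this skill, per the source evidence.
ALLOWED_HOSTS = {"api.openalex.org", "api.unpaywall.org"}

def unexpected_urls(script_text: str) -> list[str]:
    """Return every hard-coded URL whose host is outside the allowlist.
    A cheap first pass only: it cannot catch URLs built at runtime, so it
    complements rather than replaces sandboxed execution with egress limits."""
    urls = re.findall(r"https?://[^\s\"')]+", script_text)
    return [u for u in urls if urlparse(u).hostname not in ALLOWED_HOSTS]
```

Anything this returns for the skill's scripts would warrant a closer look before the sandbox stage.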
No LinkLoot Runner artifacts yet attest that this skill has been tested, verified safe, or marked production-ready. The main visible risks are outbound network access, third-party scholarly data reliability, local cache writes under /tmp, and the temptation to treat generated literature reviews as authoritative. The skill should be reviewed as untrusted code and run only in a sandbox until Runner evidence exists.