
KIYANLEND

Private Lending Infrastructure Powered by Encrypted AI

Video

Tech Stack

React
Next
Solidity

Description



## **Inspiration**

 We were inspired by a simple but recurring pain: how hard it is to run lending workflows, especially in private credit, without turning borrower documents into a privacy liability. Today, most lending stacks involve copying sensitive PDFs and financial docs across inboxes, vendor tools, and data rooms. With AI entering the mix, the risk compounds. Kiyan was born from the idea that **borrowers should own their data** and lenders should only see **what they're meant to**: scores, flags, and repayment terms, not the raw source.

---

## **What it does**

 Kiyan is a **confidential lending infrastructure layer** that lets platforms or institutions run private credit workflows (underwriting, pay/repay, servicing) without ever needing to hold or expose raw borrower documents.

 * **Kiyan Vault** protects borrower data at rest using privacy-preserving storage.
 * **Kiyan AI** converts sensitive inputs into structured outputs (like Trust Scores or repayment terms), without leaking raw documents to LLMs.
 * **Kiyan Contracts** handle payment, repayment, and reconciliation logic, with full audit trails.

 The borrower can always access their full documents; lenders can only access **approved outputs** with audit logs, not the raw inputs.

---

## How we built it

We built Kiyan around a strict rule: **raw borrower documents must never become platform-readable data by default**, but borrowers must still be able to access their documents at any time. To make that possible, we split the system into three layers that work together:

### 1) Kiyan Vault (private storage + permissioned retrieval)

Kiyan Vault is our confidentiality layer for borrower documents and sensitive underwriting inputs.

* **Client-first protection:** when a borrower uploads a document, we protect it *before* it becomes “platform data.” Instead of storing plaintext in our normal database, we encrypt and split the sensitive content into protected fragments.
* **Distributed storage of fragments:** these fragments are stored across multiple storage nodes so **no single node can reconstruct the document** by itself. This removes the “single database breach = full compromise” failure mode.
* **Borrower-owned retrieval:** when the borrower wants to view or export their documents, the system retrieves enough fragments, recombines them, and decrypts them for the borrower. This is why the borrower can always access the full document, even though the platform doesn’t have a readable copy by default.
* **Policy & audit wrapper:** access is mediated through explicit permissions. We treat “who accessed what and why” as a first-class product requirement, not a logging afterthought.
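
The protect/split/recombine flow above can be sketched in a few lines. This is a minimal illustration, not our production code: it assumes AES-256-GCM for the client-side encryption and a simple all-fragments-required XOR split; the real Vault's cipher choices, sharing scheme, and node protocol are more involved.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a document before it becomes "platform data".
function encryptDocument(plaintext: Buffer, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Split the ciphertext into n fragments with XOR secret sharing:
// every fragment is required to reconstruct, so no single storage
// node can rebuild the document on its own.
function splitIntoFragments(ciphertext: Buffer, n: number): Buffer[] {
  const fragments: Buffer[] = [];
  let acc = Buffer.from(ciphertext); // running XOR with each random fragment
  for (let i = 0; i < n - 1; i++) {
    const frag = randomBytes(ciphertext.length);
    fragments.push(frag);
    acc = Buffer.from(acc.map((b, j) => b ^ frag[j]));
  }
  fragments.push(acc); // last fragment makes the XOR of all equal the ciphertext
  return fragments;
}

// Borrower-owned retrieval: XOR the fragments back together, then decrypt.
function recombineFragments(fragments: Buffer[]): Buffer {
  return fragments.reduce((acc, frag) =>
    Buffer.from(acc.map((b, j) => b ^ frag[j]))
  );
}

function decryptDocument(ciphertext: Buffer, key: Buffer, iv: Buffer, tag: Buffer) {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}
```

Because each stored fragment is indistinguishable from random bytes without the others, a breach of any single node yields nothing, which is exactly the "single database breach = full compromise" failure mode the Vault removes.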

What we actually implemented in the demo:

* a borrower upload flow that stores protected document data in Kiyan Vault rather than in the standard app DB,
* an access/permission flow where the borrower is the owner and controls what can be derived or shared,
* an output channel that only returns **approved fields** (scores/flags/terms) to the application layer.

### 2) Kiyan AI (controlled outputs, not “AI reads PDFs”)

Kiyan AI is not “throw documents at an LLM.” It’s a controlled analysis pipeline with hard boundaries.

We built Kiyan AI with two operating modes (so we can support different partner constraints):

**Mode A: Output-only AI (LLM never receives raw documents)**

* We run deterministic or tightly-scoped analysis over vault-protected inputs to produce **structured underwriting outputs**: Trust Score, risk flags, affordability bands, recommended limits/terms.
* The application and any LLMs only receive these **minimal outputs**, never the documents.
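
A toy version of Mode A's deterministic analysis, purely to show the shape of the boundary: structured inputs derived inside the vault go in, a minimal structured output comes out, and no raw text is ever part of the interface. Every field name and threshold here (`monthlyIncome`, `high_dti`, the limit multiplier) is a hypothetical stand-in, not Kiyan's actual scoring model.

```typescript
// Inputs are already-structured values derived inside the vault boundary;
// the raw documents they came from never cross into this function's caller.
interface UnderwritingInputs {
  monthlyIncome: number;
  monthlyDebtPayments: number;
  missedPaymentsLast12m: number;
}

// The only thing the application layer ever sees.
interface UnderwritingOutput {
  trustScore: number;   // 0..100
  riskFlags: string[];
  recommendedLimit: number;
}

function scoreBorrower(inputs: UnderwritingInputs): UnderwritingOutput {
  const dti = inputs.monthlyDebtPayments / inputs.monthlyIncome; // debt-to-income
  const riskFlags: string[] = [];
  if (dti > 0.4) riskFlags.push("high_dti");
  if (inputs.missedPaymentsLast12m > 0) riskFlags.push("recent_missed_payments");

  // Deterministic score: same inputs always produce the same output,
  // which keeps the pipeline auditable.
  let trustScore = 100;
  trustScore -= Math.min(50, Math.round(dti * 100));
  trustScore -= inputs.missedPaymentsLast12m * 10;
  trustScore = Math.max(0, trustScore);

  return {
    trustScore,
    riskFlags,
    recommendedLimit: Math.round(inputs.monthlyIncome * (trustScore / 100) * 3),
  };
}
```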

**Mode B: Confidential AI (if document summarization is required)**

* For workflows that require text extraction or summarization, AI runs inside a **confidential execution boundary**. The model can process the sensitive text, but the surrounding infrastructure is designed so operators and systems outside that boundary can’t inspect the raw input or internal memory.
* Even in this mode, we still enforce **schema-gated outputs**: only pre-approved fields leave the confidential boundary.

In both modes we enforce:

* **Schema gating:** only a predefined JSON schema can pass forward (prevents accidental leakage).
* **Network isolation:** AI components cannot pull from Vault directly unless explicitly authorized.
* **No raw document forwarding:** the “default path” is always output-only.
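
Schema gating can be sketched as a whitelist rebuild (field names here are illustrative, not our actual schema): the gate reconstructs the output object from pre-approved keys only, so anything extra the model emits, such as quoted document text, is dropped before it can leave the boundary, and malformed required fields are rejected outright.

```typescript
// The only shape allowed to leave the AI boundary.
type GatedOutput = {
  trustScore: number;
  riskFlags: string[];
  recommendedTerms: string;
};

function gateModelOutput(raw: unknown): GatedOutput {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("output must be an object");
  }
  const obj = raw as Record<string, unknown>;
  const { trustScore, riskFlags, recommendedTerms } = obj;

  if (typeof trustScore !== "number" || trustScore < 0 || trustScore > 100) {
    throw new Error("trustScore must be a number in 0..100");
  }
  if (!Array.isArray(riskFlags) || !riskFlags.every((f) => typeof f === "string")) {
    throw new Error("riskFlags must be string[]");
  }
  if (typeof recommendedTerms !== "string") {
    throw new Error("recommendedTerms must be a string");
  }

  // Rebuild from the whitelist: keys outside the schema never pass forward.
  return { trustScore, riskFlags, recommendedTerms } as GatedOutput;
}
```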

### 3) Kiyan Contracts (pay/repay workflow spine)

To prove this can operate like a real lending product, we built and deployed the lifecycle spine:

* **Smart contracts representing loan states** (funded, active, repaid, delinquent markers where applicable).
* **Pay & repay functions** that record events deterministically.
* **Event logs as the audit trail** for lifecycle changes (payment posted, repayment received, balance updated).

This backbone is what we used in demos to show how the platform behaves end-to-end, even before we build the full servicing suite.
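
The on-chain contracts are written in Solidity; the lifecycle logic they encode can be modeled in a few lines of TypeScript (state names come from the text above, event shapes are assumed for illustration). Every pay/repay call appends an event, so the log doubles as the audit trail.

```typescript
type LoanState = "funded" | "active" | "repaid";

// One entry per lifecycle change; on-chain these are emitted events.
interface LoanEvent {
  kind: "PaymentPosted" | "RepaymentReceived";
  amount: number;
  balance: number;
}

class Loan {
  state: LoanState = "funded";
  balance: number;
  readonly events: LoanEvent[] = []; // append-only audit trail

  constructor(principal: number) {
    this.balance = principal;
  }

  // Disbursement to the borrower activates the loan.
  pay(amount: number): void {
    if (this.state !== "funded") throw new Error("loan already active");
    this.state = "active";
    this.events.push({ kind: "PaymentPosted", amount, balance: this.balance });
  }

  // Repayments reduce the balance; full repayment closes the loan.
  repay(amount: number): void {
    if (this.state !== "active") throw new Error("loan not active");
    this.balance = Math.max(0, this.balance - amount);
    this.events.push({ kind: "RepaymentReceived", amount, balance: this.balance });
    if (this.balance === 0) this.state = "repaid";
  }
}
```

Keeping transitions this explicit is what makes the lifecycle deterministic: replaying the event log always reproduces the same balances and states.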


---

## Challenges we ran into (building something like this)

The hardest part wasn't writing contracts; it was building a system where privacy is real and enforceable, not a marketing claim.

### 1) Designing “borrower can retrieve everything, platform can’t”

This sounds simple but is tricky in practice. Most apps assume the server can always read uploaded files. We had to redesign the flow so that:

* documents are protected before becoming “platform data,”
* the borrower remains the data owner,
* retrieval works reliably without creating a hidden plaintext copy in logs, caches, or debugging tooling.

### 2) Preventing AI leakage while keeping AI useful

The natural impulse is to feed documents to an LLM and ask for a score. That’s also the fastest way to create a governance nightmare.
So we had to:

* define exactly what outputs are allowed (score, flags, terms),
* build schema enforcement so only those outputs can move forward,
* separate “analysis” from “explanation,” so language generation never becomes data ingestion.

### 3) Making privacy constraints compatible with real loan ops

Loan workflows aren’t one-shot. You have partial repayments, reschedules, late events, disputes, and reconciliations. The challenge was modeling enough of this lifecycle to be credible while keeping the logic interpretable and auditable.

### 4) Proving it end-to-end inside a demo timeframe

Building privacy + workflow + integrations simultaneously forces constant tradeoffs: what to mock, what to implement, and what proof is strongest in a short demo. We focused on the proof that matters most: **can this run pay/repay with an audit trail while keeping docs under borrower control?**



---

## **Accomplishments that we're proud of**

 * We deployed a working **pay/repay workflow** as smart contracts, proving the execution backbone works in real time.
 * We created a **modular platform demo** that combines privacy storage, AI scoring, and on-chain lifecycle tracking.
 * We built the pitch, UX, and technical framing to speak to real-world stakeholders (loan agents, banks, AWS strategists), and it resonated.
 * Most importantly, we proved you can run lending workflows **without ever turning borrower docs into plaintext liability**.

---

## **What we learned**

 * **AI without boundaries is dangerous.** You can't just "plug in a model"; you need controls, schemas, and compute boundaries to make AI usable in regulated flows.
 * **Borrower-owned data changes how workflows look.** Instead of pulling everything into your system, you compute what you need and **leave the raw data where it belongs**.
 * **It’s not about replacing loan systems; it’s about building a safer spine** they can run on. That insight reshaped our product design.

Funding Status

NA
Team Lead
Mohammed Alawad
Project Links
Industry
RWA, SocialFi