Introduction
With artificial intelligence advancing rapidly, large language models (LLMs) have become an indispensable part of the developer's toolbox. Relying on cloud APIs, however, raises data-privacy concerns and can incur significant costs. This article shows how to combine Spring Boot and the Spring AI framework with the Ollama local model service to build an intelligent Q&A system that runs entirely on a local Windows machine.
Technology Stack Overview
Spring Boot and Spring AI
Spring Boot, the most popular application framework in the Java ecosystem, makes it possible to build production-grade applications quickly. Spring AI is a newer member of the Spring ecosystem designed specifically for AI integration: it simplifies interaction with a variety of large language models behind a unified API.
The Ollama Local Model Service
Ollama is an open-source project that lets developers run and manage large language models locally. It supports many open models, including Llama and Mistral, and exposes a simple API. With Ollama we can use powerful language-model capabilities without an internet connection.
Environment Setup
Hardware Requirements
Windows 10/11
At least 16 GB of RAM (32 GB or more recommended)
NVIDIA GPU (optional, speeds up inference)
Software Installation
1. Install Ollama: download the Windows installer from the Ollama website (https://ollama.ai) and run it.
2. Verify the installation:
ollama list
Project Setup
Creating the Spring Boot Project
Create a project with Spring Initializr (https://start.spring.io) and select the following dependencies:
- Spring Web
- Lombok
- Spring AI (add manually if not listed)
Configuring pom.xml
Make sure the Spring AI Ollama starter is included:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    <version>0.8.1</version>
</dependency>
Application Configuration
application.yml:
spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        model: deepseek
        options:
          temperature: 0.7
          top-p: 0.9
Core Features
The Q&A Service Layer
Create the QaService class:
@Service
public class QaService {

    private final OllamaChatClient chatClient;

    public QaService(OllamaChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String generateAnswer(String prompt) {
        return chatClient.call(prompt);
    }

    public Flux<String> generateStreamAnswer(String prompt) {
        return chatClient.stream(prompt);
    }
}
The Controller
QaController.java:
@RestController
@RequestMapping("/api/qa")
public class QaController {

    private final QaService qaService;

    public QaController(QaService qaService) {
        this.qaService = qaService;
    }

    @PostMapping("/ask")
    public ResponseEntity<String> askQuestion(@RequestBody String question) {
        String answer = qaService.generateAnswer(question);
        return ResponseEntity.ok(answer);
    }

    @GetMapping(value = "/ask-stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> askQuestionStream(@RequestParam String question) {
        return qaService.generateStreamAnswer(question);
    }
}
Prompt Engineering
To improve answer quality, we can introduce a prompt template.
PromptTemplateService.java:
@Service
public class PromptTemplateService {

    private static final String QA_TEMPLATE = """
            You are a professional AI assistant. Please follow these rules when answering:
            1. Be professional and accurate
            2. If the question involves uncertain information, say so explicitly
            3. Keep the answer clear and concise

            Question: {question}
            """;

    public String buildPrompt(String question) {
        return QA_TEMPLATE.replace("{question}", question);
    }
}
Update QaService to use the template (inject a PromptTemplateService field through the constructor):
public String generateAnswer(String prompt) {
    String formattedPrompt = promptTemplateService.buildPrompt(prompt);
    return chatClient.call(formattedPrompt);
}
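Because the substitution is plain string replacement, it can be sketched and sanity-checked outside Spring. The class below is a standalone illustration only; the name PromptBuilder and the blank-question guard are additions, not part of the service above:

```java
// Standalone sketch of the template substitution in PromptTemplateService.
// PromptBuilder and the blank-question guard are illustrative additions.
public class PromptBuilder {

    private static final String QA_TEMPLATE = """
            You are a professional AI assistant. Please follow these rules when answering:
            1. Be professional and accurate
            2. If the question involves uncertain information, say so explicitly
            3. Keep the answer clear and concise

            Question: {question}
            """;

    public static String buildPrompt(String question) {
        // Guard against empty input before it reaches the model
        if (question == null || question.isBlank()) {
            throw new IllegalArgumentException("question must not be empty");
        }
        return QA_TEMPLATE.replace("{question}", question.strip());
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("What is Ollama?"));
    }
}
```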
Advanced Features
Conversation History Management
A simple session-scoped conversation memory.
ConversationManager.java:
@Service
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class ConversationManager {

    private final List<String> conversationHistory = new ArrayList<>();

    public void addExchange(String userInput, String aiResponse) {
        conversationHistory.add("User: " + userInput);
        conversationHistory.add("AI: " + aiResponse);
    }

    public String getConversationContext() {
        return String.join("\n", conversationHistory);
    }

    public void clear() {
        conversationHistory.clear();
    }
}
Update the prompt-building method to include the history (the template itself also needs a {history} placeholder):
public String buildPrompt(String question, String history) {
    return QA_TEMPLATE.replace("{history}", history)
                      .replace("{question}", question);
}
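One thing the session-scoped manager above does not handle is context-window growth: the history list grows without bound, and eventually the prompt will exceed what the model can accept. A minimal sketch of a bounded buffer that evicts the oldest exchange first — the class name and the limit are illustrative choices, not Spring AI API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a bounded conversation buffer: keeps only the most recent
// exchanges so the assembled prompt stays within the model's context window.
public class BoundedHistory {

    private final Deque<String> lines = new ArrayDeque<>();
    private final int maxExchanges;

    public BoundedHistory(int maxExchanges) {
        this.maxExchanges = maxExchanges;
    }

    public void addExchange(String userInput, String aiResponse) {
        lines.addLast("User: " + userInput);
        lines.addLast("AI: " + aiResponse);
        // Each exchange occupies two lines; evict the oldest pair when over the limit
        while (lines.size() > maxExchanges * 2) {
            lines.removeFirst();
            lines.removeFirst();
        }
    }

    public String getContext() {
        return String.join("\n", lines);
    }

    public static void main(String[] args) {
        BoundedHistory history = new BoundedHistory(2);
        history.addExchange("hello", "hi there");
        System.out.println(history.getContext());
    }
}
```

In ConversationManager the same eviction could be added inside addExchange with an identical while-loop.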
Document-Based Q&A
Answering questions about an uploaded document.
DocumentService.java:
@Service
public class DocumentService {

    private final ResourceLoader resourceLoader;
    private final TextSplitter textSplitter;

    public DocumentService(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
        this.textSplitter = new TokenTextSplitter();
    }

    public List<String> processDocument(MultipartFile file) throws IOException {
        String content = new String(file.getBytes(), StandardCharsets.UTF_8);
        return textSplitter.split(content);
    }

    public String extractRelevantParts(List<String> chunks, String question) {
        // Simplified relevance matching -- real projects should use embedding vectors
        return chunks.stream()
                .filter(chunk -> chunk.toLowerCase().contains(question.toLowerCase()))
                .findFirst()
                .orElse("");
    }
}
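The extractRelevantParts method above matches by raw substring, which fails whenever the question is phrased differently from the document. A slightly stronger stand-in, still far short of embedding-based retrieval, is to score each chunk by how many of the question's words it contains. ChunkRanker is a hypothetical helper, not part of the service above:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Score chunks by word overlap with the question and return the best one.
// A sketch only: production retrieval should use embeddings + a vector store.
public class ChunkRanker {

    // Lowercase and split on non-word characters to get a bag of words
    private static Set<String> words(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isBlank())
                .collect(Collectors.toSet());
    }

    public static String mostRelevant(List<String> chunks, String question) {
        Set<String> queryWords = words(question);
        return chunks.stream()
                .max(Comparator.comparingLong(
                        chunk -> words(chunk).stream()
                                .filter(queryWords::contains)
                                .count()))
                .orElse("");
    }
}
```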
Add a document Q&A endpoint to the controller:
@PostMapping(value = "/ask-with-doc", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public ResponseEntity<String> askWithDocument(
        @RequestParam String question,
        @RequestParam MultipartFile document) throws IOException {
    List<String> chunks = documentService.processDocument(document);
    String context = documentService.extractRelevantParts(chunks, question);
    String prompt = """
            Answer the question based on the following document content:

            Relevant part of the document:
            {context}

            Question: {question}
            """.replace("{context}", context)
               .replace("{question}", question);
    String answer = qaService.generateAnswer(prompt);
    return ResponseEntity.ok(answer);
}
Front-End
A Simple HTML Page
resources/static/index.html:
<!DOCTYPE html>
<html>
<head>
    <title>Local AI Q&A System</title>
    <script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
</head>
<body>
    <h1>Local Q&A System</h1>
    <div>
        <textarea id="question" rows="4" cols="50"></textarea>
    </div>
    <button onclick="askQuestion()">Ask</button>
    <div id="answer" style="margin-top: 20px; border: 1px solid #ccc; padding: 10px;"></div>
    <script>
        function askQuestion() {
            const question = document.getElementById('question').value;
            document.getElementById('answer').innerText = "Thinking...";
            axios.post('/api/qa/ask', question, {
                headers: { 'Content-Type': 'text/plain' }
            })
            .then(response => {
                document.getElementById('answer').innerText = response.data;
            })
            .catch(error => {
                document.getElementById('answer').innerText = "Error: " + error.message;
            });
        }
    </script>
</body>
</html>
Streaming Response Page
Add a streaming Q&A section:
<div style="margin-top: 30px;">
    <h2>Streaming Q&A</h2>
    <textarea id="streamQuestion" rows="4" cols="50"></textarea>
    <button onclick="askStreamQuestion()">Ask (streaming)</button>
    <div id="streamAnswer" style="margin-top: 20px; border: 1px solid #ccc; padding: 10px;"></div>
</div>
<script>
    function askStreamQuestion() {
        const question = document.getElementById('streamQuestion').value;
        const answerDiv = document.getElementById('streamAnswer');
        answerDiv.innerText = "";
        const eventSource = new EventSource(`/api/qa/ask-stream?question=${encodeURIComponent(question)}`);
        eventSource.onmessage = function(event) {
            answerDiv.innerText += event.data;
        };
        eventSource.onerror = function() {
            eventSource.close();
        };
    }
</script>
Performance Tuning and Debugging
Model Parameter Tuning
Adjust the model parameters in application.yml:
spring:
  ai:
    ollama:
      chat:
        options:
          temperature: 0.5   # controls creativity (0-1)
          top-p: 0.9         # nucleus-sampling threshold
          num-predict: 512   # maximum number of tokens to generate
Logging
To monitor the AI interactions, the simplest approach is to log around the chat call itself — for example directly in QaService (no separate HTTP-client interceptor configuration is needed):
private static final Logger log = LoggerFactory.getLogger(QaService.class);

public String generateAnswer(String prompt) {
    log.info("Ollama request: {}", prompt);
    String response = chatClient.call(prompt);
    log.debug("Ollama response: {}", response);
    return response;
}
Timeouts
Configure connection and read timeouts:
spring:
  ai:
    ollama:
      client:
        connect-timeout: 30s
        read-timeout: 5m
Security Hardening
API Authentication
Add simple API-key authentication.
SecurityConfig.java:
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Value("${app.api-key}")
    private String apiKey;

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/**").authenticated()
                .anyRequest().permitAll()
            )
            .addFilterBefore(new ApiKeyFilter(apiKey), UsernamePasswordAuthenticationFilter.class)
            .csrf(csrf -> csrf.disable());
        return http.build();
    }
}

class ApiKeyFilter extends OncePerRequestFilter {

    private final String expectedApiKey;

    public ApiKeyFilter(String expectedApiKey) {
        this.expectedApiKey = expectedApiKey;
    }

    @Override
    protected boolean shouldNotFilter(HttpServletRequest request) {
        // Only guard the API endpoints; static pages stay public
        return !request.getRequestURI().startsWith("/api/");
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        String providedKey = request.getHeader("X-API-KEY");
        if (!expectedApiKey.equals(providedKey)) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Invalid API key");
            return;
        }
        // Mark the request as authenticated so authorizeHttpRequests passes
        SecurityContextHolder.getContext().setAuthentication(
                new UsernamePasswordAuthenticationToken("api-client", null, List.of()));
        filterChain.doFilter(request, response);
    }
}
Deployment and Running
Starting the Ollama Service
From a Windows command prompt:
ollama serve
Running the Spring Boot Application
Run the main class from your IDE, or use Maven:
mvn spring-boot:run
Testing the System
Open http://localhost:8080 to try the Q&A page, or exercise the API endpoints with Postman.
Further Ideas
Vector Database Integration
Consider integrating a vector database such as Chroma or Milvus for more accurate document retrieval:
@Configuration
public class VectorStoreConfig {

    @Bean
    public VectorStore vectorStore(EmbeddingClient embeddingClient) {
        return new SimpleVectorStore(embeddingClient);
    }

    @Bean
    public EmbeddingClient embeddingClient(OllamaApi ollamaApi) {
        return new OllamaEmbeddingClient(ollamaApi);
    }
}
Multi-Model Switching
Implement dynamic model selection:
@Service
public class ModelSelectorService {

    private final Map<String, ChatClient> clients;

    public ModelSelectorService(OllamaChatClient deepSeekClient,
                                OllamaChatClient llamaClient) {
        this.clients = Map.of(
            "deepseek", deepSeekClient,
            "llama", llamaClient
        );
    }

    public ChatClient getClient(String modelName) {
        return clients.getOrDefault(modelName, clients.get("deepseek"));
    }
}
Summary
This article walked through building a complete LLM-powered Q&A system on a local Windows machine with Spring Boot, Spring AI, and Ollama. With this setup, developers can:
- Run AI services entirely locally, protecting data privacy
- Use the Spring ecosystem to build production-grade applications quickly
- Choose freely among different open-source models
- Cover everything from basic Q&A to document analysis
As local AI technology matures, this architecture offers enterprises a secure, controllable AI solution. Readers can extend the examples here to their own needs, for instance by adding more models, refining the prompt engineering, or integrating more complex business logic.