Problem:
When inserting data in bulk, inserting one row at a time in a for loop is slow, especially when transactions are involved.
Solution:
Use the functional interface java.util.function.Consumer (from java.util.function) to receive the mapper call, and have a utility class split the data into batches. A single INSERT that carries too many rows actually takes longer, not shorter, and the database also caps SQL statement length (it is not unlimited), so keep each batch around 500 to 1000 rows, depending on the actual business data volume.
Code:
Utility class
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class DatabaseUtils {

    /** Maximum number of rows per INSERT statement. */
    private static final int MAX_INSERT_ITEM_NUM_PER_TIME = 500;

    public static <T> void batchSplitInsert(List<T> list, Consumer<? super List<T>> insertFunc) {
        if (list == null || list.isEmpty()) {
            return;
        }
        // Small lists need no splitting: insert them in one statement.
        if (list.size() <= MAX_INSERT_ITEM_NUM_PER_TIME) {
            insertFunc.accept(list);
            return;
        }
        // Cut the list into batches of at most MAX_INSERT_ITEM_NUM_PER_TIME rows.
        List<List<T>> all = new ArrayList<>();
        for (int i = 0; i < list.size(); i += MAX_INSERT_ITEM_NUM_PER_TIME) {
            all.add(list.subList(i, Math.min(i + MAX_INSERT_ITEM_NUM_PER_TIME, list.size())));
        }
        // Insert the batches concurrently.
        all.parallelStream().forEach(insertFunc);
    }
}
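The splitting step can be sanity-checked in isolation with a recording Consumer in place of a real mapper. This is a standalone sketch (the class name BatchSplitDemo and the sequential loop are made up here for determinism; the real utility runs batches in parallel):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchSplitDemo {
    static final int BATCH_SIZE = 500;

    // Same partitioning idea as DatabaseUtils.batchSplitInsert,
    // returning the size of each batch so it can be inspected.
    static <T> List<Integer> batchSizes(List<T> list, Consumer<? super List<T>> insertFunc) {
        List<Integer> sizes = new ArrayList<>();
        for (int i = 0; i < list.size(); i += BATCH_SIZE) {
            List<T> sub = list.subList(i, Math.min(i + BATCH_SIZE, list.size()));
            insertFunc.accept(sub); // would be the mapper call in real code
            sizes.add(sub.size());
        }
        return sizes;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 1234; i++) data.add(i);
        // 1234 rows -> batches of 500, 500, 234
        System.out.println(batchSizes(data, batch -> { /* stand-in for the mapper insert */ }));
    }
}
```

Note that subList returns views of the original list, so splitting itself copies no data.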
Service-layer call (the Mapper interface definition is omitted)
public int insertBatchBatchAdaWorks(List<AdaBatchWorks> adaBatchWorksList) {
    Consumer<List<AdaBatchWorks>> consumer = o -> adaBatchWorksMapper.insertBatchBatchAdaWorks(o);
    DatabaseUtils.batchSplitInsert(adaBatchWorksList, consumer);
    return adaBatchWorksList.size();
}
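One caveat: parallelStream dispatches the per-batch inserts to common ForkJoinPool worker threads, while a Spring @Transactional context is bound to the calling thread, so the batches will not run inside the caller's transaction. When all batches must commit or roll back together, a sequential variant is safer. A minimal sketch (the class and method names here are hypothetical):

```java
import java.util.List;
import java.util.function.Consumer;

public class SequentialBatchInsert {
    private static final int BATCH_SIZE = 500;

    // Runs every batch on the calling thread, so a surrounding
    // transaction (e.g. Spring @Transactional) still covers all inserts.
    public static <T> void batchSplitInsertSequential(List<T> list, Consumer<? super List<T>> insertFunc) {
        if (list == null || list.isEmpty()) {
            return;
        }
        for (int i = 0; i < list.size(); i += BATCH_SIZE) {
            insertFunc.accept(list.subList(i, Math.min(i + BATCH_SIZE, list.size())));
        }
    }
}
```

The trade-off: the parallel version is faster when each batch commits independently and partial success is acceptable; the sequential version preserves all-or-nothing semantics.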
Mapper.xml
<insert id="insertBatchBatchAdaWorks" parameterType="java.util.List">
    INSERT INTO ada_batch_works
        (ada_batch_id, ada_works_id, ada_works_adopt_count, ada_works_refuse_count, create_by, create_time)
    VALUES
    <foreach collection="list" item="item" separator=",">
        (#{item.adaBatchId}, #{item.adaWorksId}, #{item.adaWorksAdoptCount}, #{item.adaWorksRefuseCount}, #{item.createBy}, #{item.createTime})
    </foreach>
</insert>
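The foreach above inlines every row into one statement, so statement size grows linearly with the batch. A rough upper bound on batch size is the server's packet limit (MySQL's max_allowed_packet) divided by an estimated bytes-per-row. A back-of-the-envelope sketch, where the 4 MB limit, the 200-bytes-per-row figure, and the 50% headroom are all illustrative assumptions, not measurements:

```java
public class BatchSizeEstimate {
    // Rough rows-per-statement bound: packet budget / estimated bytes
    // per rendered VALUES tuple. All figures here are assumptions.
    static int maxRowsPerStatement(long maxAllowedPacketBytes, int estBytesPerRow) {
        long safeBudget = maxAllowedPacketBytes / 2; // keep 50% headroom
        return (int) (safeBudget / estBytesPerRow);
    }

    public static void main(String[] args) {
        // e.g. a 4 MB packet limit and ~200 bytes per row
        System.out.println(maxRowsPerStatement(4L * 1024 * 1024, 200));
    }
}
```

The estimate usually comes out far above 500 to 1000 rows, which shows the packet limit is rarely the binding constraint; the smaller batches recommended above are mainly about parse time, lock duration, and memory per statement.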