V2EX › Nazz › All replies · page 8 of 42
Total replies: 830
@flynnlemon We're only discussing GC here.
@zhazi In the Spring Boot Data JDBC part, I stared at the docs for ages and still couldn't figure out how to use @Query; one look at someone else's demo on GitHub and I got it immediately.
@zhazi Isn't it generally considered bad?
I've got it: add another heap, using the access count as the comparison key.
MC uses a 4-ary min-heap to maintain expiration times efficiently, but it only implements LRU, so its hit rate falls short of LFU.
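For illustration only, a minimal sketch of a 4-ary min-heap in Go. None of this is MC's actual code and the names are made up. With priority set to an expiration timestamp it orders entries for TTL eviction; setting priority to an access count would give the LFU-style second heap suggested above.

package main

import "fmt"

// entry pairs a cache key with the priority the heap is ordered by.
type entry struct {
	key      string
	priority int64 // expiresAt (unix nanoseconds) or an access count
}

// quadHeap is a 4-ary min-heap: each node has up to four children,
// which keeps the tree shallow and sift operations short.
type quadHeap struct {
	items []entry
}

func (h *quadHeap) Push(e entry) {
	h.items = append(h.items, e)
	h.up(len(h.items) - 1)
}

// Pop removes and returns the entry with the smallest priority.
func (h *quadHeap) Pop() (entry, bool) {
	if len(h.items) == 0 {
		return entry{}, false
	}
	top := h.items[0]
	last := len(h.items) - 1
	h.items[0] = h.items[last]
	h.items = h.items[:last]
	if len(h.items) > 0 {
		h.down(0)
	}
	return top, true
}

func (h *quadHeap) up(i int) {
	for i > 0 {
		parent := (i - 1) / 4
		if h.items[i].priority >= h.items[parent].priority {
			break
		}
		h.items[i], h.items[parent] = h.items[parent], h.items[i]
		i = parent
	}
}

func (h *quadHeap) down(i int) {
	n := len(h.items)
	for {
		smallest := i
		for c := 4*i + 1; c <= 4*i+4 && c < n; c++ {
			if h.items[c].priority < h.items[smallest].priority {
				smallest = c
			}
		}
		if smallest == i {
			return
		}
		h.items[i], h.items[smallest] = h.items[smallest], h.items[i]
		i = smallest
	}
}

func main() {
	var h quadHeap
	h.Push(entry{"a", 30})
	h.Push(entry{"b", 10})
	h.Push(entry{"c", 20})
	e, _ := h.Pop()
	fmt.Println(e.key) // b: the smallest priority comes out first
}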
@matrix1010 Could you briefly explain how Theine maintains expiration times and LFU?
@matrix1010 How does the GC pressure of a swiss table compare with the built-in map?
A word on Spring Boot: dependency injection is highly automated, but once something goes wrong it's hard to troubleshoot, and the official documentation is a mess.
@matrix1010 Looks like you've flushed out the otter author.
@maypok86 It seems to have triggered a bug in otter that slows it down terribly.

go test -benchmem -run=^$ -bench . github.com/lxzan/memorycache/benchmark
goos: linux
goarch: amd64
pkg: github.com/lxzan/memorycache/benchmark
cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkMemoryCache_Set-12 21004170 72.60 ns/op 9 B/op 0 allocs/op
BenchmarkMemoryCache_Get-12 43787251 40.11 ns/op 0 B/op 0 allocs/op
BenchmarkMemoryCache_SetAndGet-12 45939994 45.35 ns/op 0 B/op 0 allocs/op
BenchmarkRistretto_Set-12 12190314 122.2 ns/op 112 B/op 2 allocs/op
BenchmarkRistretto_Get-12 25565082 44.60 ns/op 16 B/op 1 allocs/op
BenchmarkRistretto_SetAndGet-12 11713868 97.06 ns/op 27 B/op 1 allocs/op
BenchmarkOtter_SetAndGet-12 13760 89816 ns/op 13887 B/op 0 allocs/op
PASS
ok github.com/lxzan/memorycache/benchmark 44.081s


func BenchmarkOtter_SetAndGet(b *testing.B) {
	var builder, _ = otter.NewBuilder[string, int](1000)
	builder.ShardCount(128)
	mc, _ := builder.Build()

	// Pre-populate the cache before the timer starts.
	for i := 0; i < benchcount; i++ {
		mc.SetWithTTL(benchkeys[i%benchcount], 1, time.Hour)
	}

	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		var i = atomic.Int64{}
		for pb.Next() {
			index := i.Add(1) % benchcount
			// Roughly 1 write per 8 operations; the rest are reads.
			if index&7 == 0 {
				mc.SetWithTTL(benchkeys[index], 1, time.Hour)
			} else {
				mc.Get(benchkeys[index])
			}
		}
	})
}
@maypok86

> I'd like some kind of answer to this, to be honest, because I've been told so much about how only memorycache can fight ristretto, but so far it's been disappointing.

MC is just an obscure library, and I'm sure hardly anyone would say that.
@maypok86 The heap is used to quickly remove expired elements. Redis seems to randomly sample keys to check for expiration, which is not as efficient as a heap (a rough sketch follows the benchmark output below). In fact, users don't care whether a library strictly implements the LRU algorithm; all they want is a KV store with TTL.

I've updated the benchmarks. My local cachebench hit test was too much of a pain in the ass to run, so I gave up on it.

goos: linux
goarch: amd64
pkg: github.com/lxzan/memorycache/benchmark
cpu: AMD EPYC 7763 64-Core Processor
BenchmarkMemoryCache_Set-4 7657497 133.3 ns/op 27 B/op 0 allocs/op
BenchmarkMemoryCache_Get-4 23179834 58.10 ns/op 0 B/op 0 allocs/op
BenchmarkMemoryCache_SetAndGet-4 20667798 59.09 ns/op 0 B/op 0 allocs/op
BenchmarkRistretto_Set-4 7739505 321.4 ns/op 135 B/op 2 allocs/op
BenchmarkRistretto_Get-4 12482400 97.67 ns/op 18 B/op 1 allocs/op
BenchmarkRistretto_SetAndGet-4 7265832 140.4 ns/op 31 B/op 1 allocs/op
PASS
ok github.com/lxzan/memorycache/benchmark 31.137s
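The rough sketch mentioned above: a min-heap keeps the soonest-to-expire entry on top, so cleanup just pops until it reaches a live entry, instead of sampling keys at random the way Redis does. This uses the standard library's container/heap and is not MC's actual code; all names here are invented.

package main

import (
	"container/heap"
	"fmt"
	"time"
)

// item records when a cached key expires.
type item struct {
	key       string
	expiresAt int64 // unix nanoseconds
}

// ttlHeap implements heap.Interface, ordered by soonest expiration.
type ttlHeap []item

func (h ttlHeap) Len() int           { return len(h) }
func (h ttlHeap) Less(i, j int) bool { return h[i].expiresAt < h[j].expiresAt }
func (h ttlHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

func (h *ttlHeap) Push(x any) { *h = append(*h, x.(item)) }

func (h *ttlHeap) Pop() any {
	old := *h
	n := len(old)
	it := old[n-1]
	*h = old[:n-1]
	return it
}

// evictExpired pops entries whose deadline has passed; each removal is
// O(log n) and the loop stops as soon as the top of the heap is still alive.
func evictExpired(h *ttlHeap, store map[string]int, now int64) {
	for h.Len() > 0 && (*h)[0].expiresAt <= now {
		it := heap.Pop(h).(item)
		delete(store, it.key)
	}
}

func main() {
	store := map[string]int{"a": 1, "b": 2}
	h := &ttlHeap{}
	heap.Push(h, item{"a", time.Now().Add(-time.Second).UnixNano()}) // already expired
	heap.Push(h, item{"b", time.Now().Add(time.Hour).UnixNano()})

	evictExpired(h, store, time.Now().UnixNano())
	fmt.Println(store) // map[b:2]: only the expired key was removed
}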
@maypok86 It's pointless to keep trying to convince people. I don't see a problem with using an indexed priority queue to implement an in-memory cache with TTL; MC works well in my company's projects. I also don't see a problem with benchmarking on random strings, since Redis keys often contain data IDs. In MC, if you only use GetWithTTL and SetWithTTL, it's just LRU.
@maypok86 The only competitor to MemoryCache (MC) is Ristretto, and neither is GC-optimized. GC optimization is not pure gain, since the codec overhead is not small. MC aims to replace Redis in light-use scenarios, where an indexed priority queue is the best data structure: simple and efficient.
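To make the "indexed" part concrete, here is a compact sketch of an indexed priority queue with a key-to-position map, so a repeated SetWithTTL can update an existing entry's deadline in place and re-sift it in O(log n) instead of pushing a duplicate node. This assumes nothing about MC's internals; the type and method names are invented, and a binary heap is used for brevity.

package main

import (
	"fmt"
	"time"
)

type node struct {
	key       string
	expiresAt int64 // unix nanoseconds
}

// indexedQueue is a binary min-heap plus a key -> heap-position map,
// which is what allows an existing entry to be found and re-sifted in place.
type indexedQueue struct {
	heap  []node
	index map[string]int
}

func newIndexedQueue() *indexedQueue {
	return &indexedQueue{index: make(map[string]int)}
}

// swap exchanges two heap slots and keeps the position map in sync.
func (q *indexedQueue) swap(i, j int) {
	q.heap[i], q.heap[j] = q.heap[j], q.heap[i]
	q.index[q.heap[i].key] = i
	q.index[q.heap[j].key] = j
}

func (q *indexedQueue) up(i int) {
	for i > 0 {
		p := (i - 1) / 2
		if q.heap[i].expiresAt >= q.heap[p].expiresAt {
			return
		}
		q.swap(i, p)
		i = p
	}
}

func (q *indexedQueue) down(i int) {
	n := len(q.heap)
	for {
		small := i
		for c := 2*i + 1; c <= 2*i+2 && c < n; c++ {
			if q.heap[c].expiresAt < q.heap[small].expiresAt {
				small = c
			}
		}
		if small == i {
			return
		}
		q.swap(i, small)
		i = small
	}
}

// SetWithTTL inserts the key or, if it already exists, rewrites its deadline
// and restores heap order from its current position.
func (q *indexedQueue) SetWithTTL(key string, ttl time.Duration) {
	deadline := time.Now().Add(ttl).UnixNano()
	if i, ok := q.index[key]; ok {
		q.heap[i].expiresAt = deadline
		q.down(i) // deadline moved later: sift down
		q.up(i)   // deadline moved earlier: sift up (only one of these moves it)
		return
	}
	q.heap = append(q.heap, node{key, deadline})
	q.index[key] = len(q.heap) - 1
	q.up(len(q.heap) - 1)
}

func main() {
	q := newIndexedQueue()
	q.SetWithTTL("a", time.Minute)
	q.SetWithTTL("a", time.Hour)           // updated in place, no duplicate entry
	fmt.Println(len(q.heap), q.index["a"]) // 1 0
}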
@nullcache As long as it's a groundhog, that's fine.
181 days ago
Replied to Sylarlong's topic in Share & Create: Zi Wei Dou Shu | This should be a first-of-its-kind feature, right?
Can it tell my financial fortune?