I've run into some performance issues while developing my read-only filesystem using FSKit.
For the screenshot below:
- enumerateDirectory returns two hardcoded items, and the extension is compiled with the release configuration
- 3000 readdirSync calls are made from Node.js (a rough C equivalent is sketched right after this list)
- macOS 15.5 (24F74)
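For reproducibility, the measurement amounts to a loop like the one below. This is an assumption on my part: the original driver is Node's readdirSync, the mount path is a placeholder, and readdir() on macOS is backed by the getdirentries syscall that the screenshot profiles.

```c
#include <dirent.h>
#include <stdio.h>
#include <time.h>

// Microbenchmark sketch: time 3000 full directory reads of the mount point.
// "/Volumes/myfs" is a placeholder; on macOS, readdir() is backed by the
// getdirentries syscall being measured.
int main(void)
{
    const int iterations = 3000;
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; i++) {
        DIR *dir = opendir("/Volumes/myfs");
        if (!dir) { perror("opendir"); return 1; }
        while (readdir(dir) != NULL) { /* drain all entries */ }
        closedir(dir);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed_us = (end.tv_sec - start.tv_sec) * 1e6 +
                        (end.tv_nsec - start.tv_nsec) / 1e3;
    printf("avg per readdir pass: %.1f us\n", elapsed_us / iterations);
    return 0;
}
```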
I see that the getdirentries syscall takes 121µs on average.
Since all other variables are minimized, this looks like FSKit<->kernel overhead.
That seems like a big number by itself, though I still need to compare it against FUSE to be sure.
But what FUSE has and FSKit seemingly doesn't (I checked every page of the FSKit docs) is kernel-side caching.
FUSE supports kernel caching of:
- lookups (entry_timeout)
- negative lookups (entry_timeout)
- attributes (attr_timeout)
- readdir results (via cache_readdir and keep_cache set in opendir)
- read and write data, but that's another topic.
And as far as I know this works for both read-only and read-write filesystems, because the kernel can assume (if the client opts in) that the cache stays valid until the kernel itself performs write operations on the corresponding inodes (create, setattr, write, etc.), as sketched below.
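To make the comparison concrete, here is a minimal sketch of how a filesystem built on libfuse's low-level API (v3.5+ for cache_readdir) opts into these caches. The timeout values and the inode/mode details are illustrative, not defaults:

```c
#define FUSE_USE_VERSION 35
#include <fuse_lowlevel.h>
#include <string.h>
#include <sys/stat.h>

// Opting into kernel-side caching in a libfuse low-level filesystem.
static void my_lookup(fuse_req_t req, fuse_ino_t parent, const char *name)
{
    struct fuse_entry_param e;
    memset(&e, 0, sizeof(e));
    e.ino = 2;                 /* resolved inode (hypothetical) */
    e.attr.st_ino = 2;
    e.attr.st_mode = S_IFREG | 0444;
    e.entry_timeout = 60.0;    /* kernel caches the name->inode mapping */
    e.attr_timeout  = 60.0;    /* kernel caches the attributes */
    fuse_reply_entry(req, &e); /* replying with e.ino == 0 and a nonzero
                                  entry_timeout caches a negative lookup */
}

static void my_opendir(fuse_req_t req, fuse_ino_t ino,
                       struct fuse_file_info *fi)
{
    fi->cache_readdir = 1;     /* let the kernel cache readdir results */
    fi->keep_cache = 1;        /* keep cached data across opens */
    fuse_reply_open(req, fi);
}
```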
My questions are:
- Is 100+µs a reasonable per-call overhead for FSKit?
- Is there any way to get kernel-side caching? If not currently, are there any plans to implement it?
Also, further performance optimization could come from a lower-level API that operates on raw inode numbers (UInt64); that would eliminate the overhead of storing, removing, and retrieving FSItem objects in a hash map.
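For comparison, libfuse's low-level API already works this way: every operation is keyed by a raw 64-bit inode number (fuse_ino_t) rather than an object handle, so no per-item lookup table is needed. A trivial illustration (the inode and mode are placeholders):

```c
#define FUSE_USE_VERSION 35
#include <fuse_lowlevel.h>
#include <sys/stat.h>

// The file is identified by a raw integer (fuse_ino_t is a uint64_t),
// not an object the library has to store and look up per request.
static void my_getattr(fuse_req_t req, fuse_ino_t ino,
                       struct fuse_file_info *fi)
{
    (void)fi;
    struct stat st = { .st_ino = ino, .st_mode = S_IFDIR | 0555 };
    fuse_reply_attr(req, &st, 60.0 /* attr_timeout in seconds */);
}
```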