Yes, that's about it. Pranith pretty much summed up whatever I would have said.
-Krutika

On Sat, Apr 22, 2017 at 12:25 PM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:

+Krutika for any other inputs you may need.

On Sat, Apr 22, 2017 at 12:21 PM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:

Sorry for the delay. The only internal process we know of that would take more time is self-heal, and we implemented a feature called granular entry self-heal, which should be enabled with sharded volumes to get the benefit. So when a brick goes down and, say, only 1 of those million entries is created/deleted, self-heal is done for only that file; it won't crawl the entire directory.

On Wed, Apr 12, 2017 at 8:11 PM, David Spisla <david.spisla@xxxxxxxxxxxx> wrote:

Dear Gluster-Community,
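For reference, a minimal sketch of how these features are typically turned on per volume with the gluster CLI (the volume name "myvol" is an example; check your Gluster version's documentation for the exact option names and defaults):

```shell
# Enable sharding so large files are split into chunks stored under .shard
gluster volume set myvol features.shard on

# Enable granular entry self-heal so only changed entries are healed
# after a brick comes back, instead of crawling the whole directory
gluster volume set myvol cluster.granular-entry-heal on
```

Options take effect on the running volume; existing files are not retroactively sharded, only files created after sharding is enabled.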
If I use the shard feature, it may happen that I end up with a huge number of shard chunks in the hidden .shard folder.
Does anybody have experience with the maximum number of files in one .shard folder?
If I have 1 million files in such a folder, I guess some operations like self-healing or other internal
operations would take a lot of time.
Sincerely
David Spisla
Software Developer
Tel: +49 761-590 34 841
iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg – Germany
---
You can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered at the District Court of Freiburg: HRB No. 701332
VAT ID: DE-24266431
_________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
--
Pranith