Breno Leitao <leitao@xxxxxxxxxx> writes:

> This is a Sphinx extension that parses the Netlink YAML spec files
> (Documentation/netlink/specs/), and generates an RST file to be
> displayed in Documentation pages.
>
> Create a new Documentation/networking/netlink_spec page, and a
> sub-page for each Netlink spec that needs to be documented, such as
> ethtool, devlink, netdev, etc.
>
> Create a Sphinx directive extension that reads the YAML spec (located
> under Documentation/netlink/specs), parses it and returns an RST
> string that is inserted where the Sphinx directive was called.

This is great! Looks like I need to fill in some missing docs in the
specs I have contributed.

I wonder whether the generated .rst content can be adjusted to improve
the resulting HTML. There are a couple of places where paragraph text
is indented when it doesn't need to be, e.g. the 'Summary' doc.

A lot of the .rst content seems to be over-indented, which causes
blockquote tags to be generated in the HTML. That, combined with a
mixture of bullets and definition lists at the same indentation level,
seems to produce HTML with inconsistent indentation.

I quickly hacked up the diff below to see if it would improve the HTML
rendering. I think the resulting HTML has fewer odd constructs and the
indentation looks better to my eye. My main aim was to ensure that,
for a given section, each indentation level uses the same construct,
whether that is a definition list or a bullet list.

It would be great to generate links from e.g. an attribute-set to its
definition.

Did you intentionally leave out the protocol values?

It looks like parse_entries will need to be extended to include the
type information for struct members, similar to how attribute sets are
shown; a rough sketch follows the diff below. I'd be happy to look at
this as a follow-up patch, unless you get there first.

Thanks,
Donald.
diff --git a/Documentation/sphinx/netlink_spec.py b/Documentation/sphinx/netlink_spec.py
index 80756e72ed4f..66ba9106b4ea 100755
--- a/Documentation/sphinx/netlink_spec.py
+++ b/Documentation/sphinx/netlink_spec.py
@@ -92,7 +92,7 @@ def parse_mcast_group(mcast_group: List[Dict[str, Any]]) -> str:
     """Parse 'multicast' group list and return a formatted string"""
     lines = []
     for group in mcast_group:
-        lines.append(rst_paragraph(group["name"], 1))
+        lines.append(rst_bullet(group["name"]))
 
     return "\n".join(lines)
 
@@ -101,7 +101,7 @@ def parse_do(do_dict: Dict[str, Any], level: int = 0) -> str:
     """Parse 'do' section and return a formatted string"""
     lines = []
     for key in do_dict.keys():
-        lines.append(rst_bullet(bold(key), level + 1))
+        lines.append(" " + bold(key))
         lines.append(parse_do_attributes(do_dict[key], level + 1) + "\n")
 
     return "\n".join(lines)
 
@@ -124,18 +124,19 @@ def parse_operations(operations: List[Dict[str, Any]]) -> str:
     for operation in operations:
         lines.append(rst_subsubtitle(operation["name"]))
         lines.append(rst_paragraph(operation["doc"]) + "\n")
-        if "do" in operation:
-            lines.append(rst_paragraph(bold("do"), 1))
-            lines.append(parse_do(operation["do"], 1))
-        if "dump" in operation:
-            lines.append(rst_paragraph(bold("dump"), 1))
-            lines.append(parse_do(operation["dump"], 1))
 
         for key in operation.keys():
             if key in preprocessed:
                 # Skip the special fields
                 continue
-            lines.append(rst_fields(key, operation[key], 1))
+            lines.append(rst_fields(key, operation[key], 0))
+
+        if "do" in operation:
+            lines.append(rst_paragraph(":do:", 0))
+            lines.append(parse_do(operation["do"], 0))
+        if "dump" in operation:
+            lines.append(rst_paragraph(":dump:", 0))
+            lines.append(parse_do(operation["dump"], 0))
 
         # New line after fields
         lines.append("\n")
 
@@ -150,7 +151,7 @@ def parse_entries(entries: List[Dict[str, Any]], level: int) -> str:
         if isinstance(entry, dict):
             # entries could be a list or a dictionary
             lines.append(
-                rst_fields(entry.get("name"), sanitize(entry.get("doc")), level)
+                rst_fields(entry.get("name"), sanitize(entry.get("doc") or ""), level)
             )
         elif isinstance(entry, list):
             lines.append(rst_list_inline(entry, level))
 
@@ -172,16 +173,16 @@ def parse_definitions(defs: Dict[str, Any]) -> str:
         for k in definition.keys():
             if k in preprocessed + ignored:
                 continue
-            lines.append(rst_fields(k, sanitize(definition[k]), 1))
+            lines.append(rst_fields(k, sanitize(definition[k]), 0))
 
         # Field list needs to finish with a new line
         lines.append("\n")
         if "entries" in definition:
-            lines.append(rst_paragraph(bold("Entries"), 1))
-            lines.append(parse_entries(definition["entries"], 2))
+            lines.append(rst_paragraph(":entries:", 0))
+            lines.append(parse_entries(definition["entries"], 1))
         if "members" in definition:
-            lines.append(rst_paragraph(bold("members"), 1))
-            lines.append(parse_entries(definition["members"], 2))
+            lines.append(rst_paragraph(":members:", 0))
+            lines.append(parse_entries(definition["members"], 1))
 
     return "\n".join(lines)
 
@@ -201,12 +202,12 @@ def parse_attributes_set(entries: List[Dict[str, Any]]) -> str:
                 # Add the attribute type in the same line
                 attr_line += f" ({inline(type_)})"
 
-            lines.append(rst_bullet(attr_line, 2))
+            lines.append(rst_bullet(attr_line, 1))
 
             for k in attr.keys():
                 if k in preprocessed + ignored:
                     continue
-                lines.append(rst_fields(k, sanitize(attr[k]), 3))
+                lines.append(rst_fields(k, sanitize(attr[k]), 2))
             lines.append("\n")
 
     return "\n".join(lines)
 
@@ -218,7 +219,7 @@ def parse_yaml(obj: Dict[str, Any]) -> str:
 
     # This is coming from the RST
     lines.append(rst_subtitle("Summary"))
-    lines.append(rst_paragraph(obj["doc"], 1))
+    lines.append(rst_paragraph(obj["doc"], 0))
 
     # Operations
     lines.append(rst_subtitle("Operations"))
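
Just to illustrate the struct member types idea mentioned above: here
is a rough, untested sketch of how the dict branch of parse_entries()
could show the member type inline, mirroring what parse_attributes_set()
already does for attributes. It assumes struct members carry a "type"
key in the YAML spec, as attributes do.

        if isinstance(entry, dict):
            # entries could be a list or a dictionary
            field_name = entry.get("name", "")
            type_ = entry.get("type")
            if type_:
                # Assumption: members have a "type" key; render it inline
                # the same way parse_attributes_set() does for attributes
                field_name += f" ({inline(type_)})"
            lines.append(
                rst_fields(field_name, sanitize(entry.get("doc") or ""), level)
            )

The rest of parse_entries() would stay as it is; I haven't checked how
this renders yet.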