
WIP: SOLR-12519 #416

Closed
wants to merge 13 commits into from

Conversation

moshebla (Contributor)

No description provided.

* [child parentFilter="fieldName:fieldValue" childFilter="fieldName:fieldValue"]
* [child parentFilter="fieldName:fieldValue" childFilter="fieldName:fieldValue" limit=20]
*/
public class DeeplyNestedChildDocTransformerFactory extends TransformerFactory {
Contributor:

FWIW I'm not convinced we need a distinct class from ChildDocTransformer. I think CDT is fine... the part that changes is what follows obtaining the matching child doc iterator. That could split out to two different methods -- plain/flat (current) and nested with ancestors (new).

Contributor Author:

Sure thing, I will merge the two transformers

: fieldVal.toString();
}

protected static class NullFilteringArrayList<T> extends ArrayList<T> {
Contributor:

Um; this ArrayList subclass seems undesirable to me; I'm not sure yet why it's used (though I'm sure you have your reasons)... but maybe there could be some change to the logic to avoid the need for this thing.

Contributor Author:

I used this since the paths in the documents are stored as array indexes. In case some documents from the array were filtered, I insert null values, so the child docs that matched the filter will be in the same array index as they were originally. Inside the iterator method, the null values are filtered, so they don't get written by the response writer.
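A minimal sketch of that idea (illustrative only, not necessarily the PR's exact code): the null placeholders keep the surviving children at their original array positions, and iteration skips them so the response writer never sees them.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

protected static class NullFilteringArrayList<T> extends ArrayList<T> {
  @Override
  public Iterator<T> iterator() {
    // collect the non-null elements and iterate over those, so the null
    // placeholders are never handed to the response writer
    List<T> nonNulls = new ArrayList<>();
    for (int i = 0; i < size(); i++) {
      T element = get(i);
      if (element != null) {
        nonNulls.add(element);
      }
    }
    return nonNulls.iterator();
  }
}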

}

@Override
public void transform(SolrDocument rootDoc, int docid) {
Contributor:

This is going to be a real challenge to come up with something that is understandable (doesn't appear too complicated). Right now it appears too complicated to me; it's hard to review. Tonight I think I'll propose some pseudocode on how to structure it.

Contributor Author:

Sure thing, any help would be welcome.
This is a pretty rough draft, so I will try to work on this ASAP before I add more logic to the transformer.

moshebla (Contributor Author) commented Jul 23, 2018:

This is only a start, I will merge these changes into the main ChildDocTransformerFactory.
This new algorithm is simpler, just like you suggested, although it contains a few minor implementation changes.

dsmiley (Contributor) commented Jul 23, 2018:

Before I look further I want to mention 2 things:

  • There needn't be separation between the non-nested and nested algorithm. We could simply use the nested algorithm but special-case when the SortedDocValues we get is null due to non-existent path. In that event, all docs get added to the root, anonymously (unlabelled). I suspect doing it this way is less code and will be sufficiently clear but we'll see?
  • My pseudocode had a TODO about double-resolving the path in order to get the label at the time the docs are added to the parent. If we make the pending data structure a Multimap<Integer,Multimap<String,SolrDocument>> then at the time we read the document and get the path we could store the relationship at this time. Or perhaps you have a better idea. I thought of putting the label into a temporary field of the SolrDocument; though I'm not sure I like that any better; maybe less.
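For reference, a hedged sketch of such a pending structure (Guava collections, which later commits in this PR do use; keying by parent path rather than the parent docID of the bullet above, and all names are illustrative):

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;
import org.apache.solr.common.SolrDocument;

import java.util.HashMap;
import java.util.Map;

public class PendingChildrenSketch {
  // outer key: the pending parent (here its nest path); inner key: the label the
  // children hang under, e.g. "comment"
  private final Map<String, Multimap<String, SolrDocument>> pendingParentPathsToChildren = new HashMap<>();

  void onChildDocRead(String parentDocPath, String label, SolrDocument childDoc) {
    // record the parent -> label -> child relationship at the moment the path is
    // resolved, so the label need not be re-derived when children are attached later
    pendingParentPathsToChildren
        .computeIfAbsent(parentDocPath, k -> ArrayListMultimap.create())
        .put(label, childDoc);
  }
}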

moshebla (Contributor Author):

In my implementation I have used a Multimap<String,SolrDocument> for the pending data, so we can decide whether to include the array index in the parent's key. That way, there can be an option to add all parents if the child document's parent is inside a list, or only the child doc's direct parent. This is controlled by how we store the pending child doc's key, with or without the array index.

dsmiley (Contributor) commented Jul 23, 2018:

When you refer to the "array index", what do you mean? Do you mean a DocValue ord? Do you mean the '#2' or whatever child index? AFAIK this code shouldn't care about child index.

moshebla (Contributor Author):

I was referring to the '#2'.
Imagine a query for a social network, where all the post comments made by a specific user are required, including the ones he commented on, which would not match the childFilter comment.author:Joe. To display these comments we would need to bring back the whole hierarchy above them.
In another scenario one might want statistics on how many comments (including replies) Joe made on each post. This can be done using just the path, excluding the array index, so isAncestor returns true.

moshebla (Contributor Author):

I will push another commit soon.

dsmiley (Contributor) commented Jul 23, 2018:

Imagine a query for a social network, where all the post comments made by a specific user are required, including the ones he commented on, which would not match the childFilter comment.author:Joe.

Assuming a comment on another comment is a parent/child relationship (i.e. the recursive comments are ancestors), wouldn't our doc transformer return those ancestors? We're expressly writing it to return those ancestors so it would. The ancestor path ord IDs will refer to paths distinguishing the comment occurrences, e.g. might resolve to a path looking something like comment#3/comment#9/comment#1/ and thus we don't mix up which comment is on which.

dsmiley (Contributor) commented Jul 23, 2018:

I think this issue should only be about returning a nested hierarchy. That at least seems clear -- something everyone would want. But computing stats... wow, that's a large scope increase that deserves its own issue. Let's table that.

moshebla (Contributor Author):

Sure thing, it won't mix up the comments, but what if the user also requests all the comments which are in the same thread, thus being in the same array just a path above?

dsmiley (Contributor) commented Jul 23, 2018:

what if the user also requests all the comments which are in the same thread, thus being in the same array just a path above?

Sounds like a distinct issue. It'd add some complexity... for example the very first child doc ID might be the 2nd child of its parent. We'd need to grab the first doc of the parent, which would have a docID earlier than we have even seen.

moshebla (Contributor Author):

OK, we'll scratch that for now and discuss it in a separate issue.

moshebla (Contributor Author):

I found this nasty bug in AtomicUpdateDocumentMerger#isAtomicUpdate, where single child docs (non-array) were mistaken for an atomic update.

dsmiley (Contributor) commented Jul 24, 2018:

RE isAtomicUpdate: Good catch! Please add a test to call this; perhaps it's just one additional line to an existing test.

moshebla (Contributor Author):

I added a new test TestDeeplyNestedChildDocTransformer#testSingularChildFilterJSON to test this.

@@ -61,6 +65,10 @@
*/
public class ChildDocTransformerFactory extends TransformerFactory {

public static final String PATH_SEP_CHAR = "/";
public static final String NUM_SEP_CHAR = "#";
private static final String sRootFilter = "*:* NOT " + NEST_PATH_FIELD_NAME + ":*";
Contributor:

We're discussing indexing NEST_PATH_FIELD_NAME tokenized, which would make this sRootFilter query more expensive. Additionally, there might not be a nest path for users not tracking the deep nested stuff. Let's instead use this: *:* NOT _ROOT_:* since that field is also only populated for child docs.

Contributor Author:

_ROOT_ is added to root docs too now, so I guess we can't use this.

Contributor:

Oh I forgot. Perhaps instead create the Lucene Query directly (rather than create a String that needs to be parsed). Use a BooleanQuery with a MUST_NOT clause around DocValuesFieldExistsQuery for the nest path field? And comment that this only applies to schemas that are enabled for this enhanced nesting.
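A hedged sketch of that suggestion (assuming Lucene's DocValuesFieldExistsQuery and the NEST_PATH_FIELD_NAME constant from IndexSchema; the ROOT_FILTER name is illustrative):

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.DocValuesFieldExistsQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;

import static org.apache.solr.schema.IndexSchema.NEST_PATH_FIELD_NAME;

// matches root documents only: everything, minus any doc that has a nest-path docValue;
// note this only applies to schemas that define the nest path field
private static final Query ROOT_FILTER = new BooleanQuery.Builder()
    .add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST)
    .add(new DocValuesFieldExistsQuery(NEST_PATH_FIELD_NAME), BooleanClause.Occur.MUST_NOT)
    .build();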

@@ -70,7 +78,8 @@ public DocTransformer create(String field, SolrParams params, SolrQueryRequest r
}

String parentFilter = params.get( "parentFilter" );
if( parentFilter == null ) {
boolean buildHierarchy = params.getBool("hierarchy", false);
if( parentFilter == null && !buildHierarchy) {
Contributor:

Perhaps we can relax this constraint; wouldn't sRootFilter be appropriate?

Contributor Author:

Sure, if we want this change to apply to the regular ChildDocumentTransformer.

Contributor:

if sRootFilter needs to depend on a nest field, and if nest fields aren't required, I guess we can't relax this constraint.

}
}

protected static String buildHierarchyChildFilterString(String queryString) {
Contributor:

When writing parsing code like this, it helps tremendously to add a comment showing the example input. Here you could also comment on what the resulting query would be.

Contributor:

Remember to provide input/output example. I think this is where the PathHierarchyTokenizer might come into play... and our discussions on the JIRA issue about that hierarchy. Can we table this for now and do in a follow-up issue? (i.e. have no special syntax right now). I'm just concerned the scope of this may be bigger than limited to this doc transformer since presumably users will want to do join queries using this syntax as well. And this touches on how we index this; which is kinda a bigger discussion than all the stuff going on already in this issue. And this'll need to be documented in the Solr Ref Guide well.

}

void addChildToParent(SolrDocument parent, SolrDocument child) {
String docPath = getSolrFieldString(child.getFirstValue(NEST_PATH_FIELD_NAME), schema.getFieldType(NEST_PATH_FIELD_NAME));
Contributor:

I don't think we should assume that the materialized SolrDocument contains any particular field. The nest fields are internal details and furthermore one day this transformer will likely have an "fl". So this means you should use DocValues to fetch this field.

Contributor Author:

Makes sense, I actually changed it so we don't fetch the path twice, but I guess you're right

return path.substring(path.lastIndexOf(PATH_SEP_CHAR.charAt(0)) + 1);
}

private String trimSuffixFromPaths(String path) {
Contributor:

Maybe we don't need this method; we'll see. I believe the goal of this method is only to trim off the trailing pound then number? I'd rather you use String.lastIndexOf type calls rather than a regexp for this simple task. Assume '#' is a special char and thus simply assume what follows is the child index. Also remember to add an example comment on the method to clearly indicate what it's doing.
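A possible lastIndexOf-based version (illustrative; the name trimLastChildIndex is borrowed from a rename proposed a little further down):

// e.g. "comment#3" -> "comment", "comment#" -> "comment", "comment" stays "comment"
private static String trimLastChildIndex(String label) {
  int numSepIndex = label.lastIndexOf('#');
  return numSepIndex == -1 ? label : label.substring(0, numSepIndex);
}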

// lookup leaf key for these children using path
// depending on the label, add to the parent at the right key/label
// TODO: unfortunately this is the 2nd time we grab the paths for these docs. resolve how?
String trimmedPath = trimSuffixFromPaths(getLastPath(label));
Contributor:

I'm a bit confused here. Firstly, it's curious to see getLastPath(label) -- since label should not have a path if it is a label. A label is a plain string key/label joining a parent to child like "comment". Secondly, why the trimSuffixFromPaths... I don't get the point. As I've said, comments of input/output sample values can help a ton in knowing what's going on.

Contributor Author:

We don't want the # and trailing number to be added to the fieldName represented in the document hierarchy, so trimSuffixFromPaths is used.

Contributor:

Ok. So I propose renaming the "label" variable to be "nestPath" since that's what it seems to be. getLastPath is fine. trimSuffixFromPaths should probably be renamed to trimLastChildIndex. Then, the "trimmedPath" variable would in fact be the "label" so should be named as such.

@@ -567,7 +567,17 @@
<field name="_root_" type="string" indexed="true" stored="true"/>
<!-- required for NestedUpdateProcessor -->
<field name="_nest_parent_" type="string" indexed="true" stored="true"/>
<field name="_nest_path_" type="string" indexed="true" stored="true"/>
<field name="_nest_path_" type="descendants_path" indexed="true" multiValued="false" docValues="true" stored="false" useDocValuesAsStored="true"/>
Contributor:

I think useDocValuesAsStored=false. This is internal and shouldn't be returned when someone does fl=*

Contributor Author:

Sure thing, will fix ASAP

@Test
public void testParentFilterJSON() throws Exception {
indexSampleData(10);
String[] tests = new String[] {
Contributor:

As I've suggested in a previous issue, I think it's simpler and more complete to test by evaluating the entire result as a string compared to an expected string. Otherwise, as a reviewer I'm asking myself what assertions would be useful for this query that you didn't think of. I think this test philosophy here is especially valuable because fundamentally you're building this interesting structured response. Essentially everything in the response is pertinent to the test -- contains very little immaterial noise.

Contributor Author:

I have just updated this test, hopefully it is a lot better now.

// load the doc
SolrDocument doc = DocsStreamer.convertLuceneDocToSolrDoc(docFetcher.doc(docId),
schema, new SolrReturnFields());
doc.setField(NEST_PATH_FIELD_NAME, fullDocPath);
Contributor:

This approach kinda requires we remove such internal fields later.

Contributor Author:

Should we figure out a way to store it somewhere else?
Or is this acceptable to remove it later?

Contributor:

It's probably easiest to remove it later, probably at the point we add children to the document. Or try the Multimap<String,Multimap<String,SolrDocument>> idea I had, wherein the intermediate String is the label.

Contributor Author:

Perhaps we can store everything in a Multimap<String,Pair<String,SolrDocument>>, thus saving each document bound to its full path?

Contributor:

A Pair would be fine too; it's only slightly more bulky than an intermediate Multimap.

Contributor Author (moshebla, Jul 25, 2018):

It seems like a Multimap<String,Pair<String,SolrDocument>> is even more efficient, since each path is unique, while the inner Multimap would store each item under each child label as another Collection holding a single value.

Contributor:

I expected the intermediate string to be a label like "comment", not a path. A label will be shared by all children under that label.

Contributor Author:

Yes, I also considered that, but I'm still thinking of a clever way to indicate whether it was a single child or an array, which would only be known at map insert time, since we trim the index (or lack thereof) when we insert the childDoc into the map.

Contributor:

Oh, ok. Perhaps have a separate standalone Set of complete paths of "single child" elements, i.e. one that tracks parent paths to child labels where that label contains a unique entry. WDYT?

Contributor:

To be clear, I just mean a Set<String> of paths like foo#1/bar#9/author# (author is single valued)

void addChildToParent(SolrDocument parent, SolrDocument child, String label) {
// lookup leaf key for these children using path
// depending on the label, add to the parent at the right key/label
// TODO: unfortunately this is the 2nd time we grab the paths for these docs. resolve how?
Contributor:

I think this TODO is not an issue based on your placement of the path in the doc.

moshebla (Contributor Author):

I tried optimizing the lookup and insertion of child documents.
Hopefully I'll get more time tomorrow to get the tests up to scratch.

private static final String sRootFilter = "*:* NOT " + NEST_PATH_FIELD_NAME + ":*";
private static final BooleanQuery rootFilter = new BooleanQuery.Builder()
.add(new BooleanClause(new MatchAllDocsQuery(), BooleanClause.Occur.MUST))
.add(new BooleanClause(new WildcardQuery(new Term(NEST_PATH_FIELD_NAME, new BytesRef("*"))), BooleanClause.Occur.MUST_NOT)).build();
Contributor:

I think I suggested using DocValuesExistsQuery which only depends on the existence of DocValues and should be efficient

Contributor Author:

My bad, I'll fix this in the next commit.

@@ -108,12 +103,11 @@ public void transform(SolrDocument rootDoc, int rootDocId) {
final LeafReaderContext leafReaderContext = searcher.getIndexReader().leaves().get(seg);
final SortedDocValues segPathDocValues = DocValues.getSorted(leafReaderContext.reader(), NEST_PATH_FIELD_NAME);

Multimap<String,SolrDocument> pendingParentPathsToChildren = ArrayListMultimap.create();
Map<String, Multimap<String, SolrDocument>> pendingParentPathsToChildren = new HashMap<>();
Contributor:

Ah good catch; yes a Map and not MultiMap at outer level once we add the label intermediate

}
pendingParentPathsToChildren.computeIfAbsent(parentDocPath, x -> ArrayListMultimap.create())
.put(trimIfSingleDoc(getLastPath(fullDocPath)), doc); // multimap add (won't replace)
Contributor:

I believe we simply want the label here; maybe add a method getLeafLabel() that takes a path and returns the leaf label. It would trim off any "#" with digits no matter if it's a single doc or not.

List<SolrDocument> list = new ArrayList<>();
parent.setField(trimmedPath, list);
void addChildrenToParent(SolrDocument parent, Multimap<String, SolrDocument> children) {
for(String childLabel: children.keySet()) {
Contributor:

there is likely a way to iterate over Map.Entry or similar so you don't have to "get" each in the next line.

Contributor Author:

It seems like Multimap entries are not returned as the collection for each key, but are instead returned as individual entries, each carrying its key. It seems more efficient to add the whole collection at once, reducing the number of lookups that have to be made for each child. We could do this in one simple lookup.

Contributor:

Ah, I see (I didn't look at Multimap's iteration options when I wrote that). Your code here is good.
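For reference, the Map.Entry-style iteration first suggested is also a single lookup per label with Guava's asMap() view (a sketch; whether a lone child should become a list is the separate single-doc question discussed below):

import com.google.common.collect.Multimap;
import org.apache.solr.common.SolrDocument;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;

void addChildrenToParent(SolrDocument parent, Multimap<String, SolrDocument> children) {
  // asMap() exposes one entry per label with the whole value collection,
  // so each label's children are attached to the parent in a single put
  for (Map.Entry<String, Collection<SolrDocument>> entry : children.asMap().entrySet()) {
    parent.setField(entry.getKey(), new ArrayList<>(entry.getValue()));
  }
}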

@@ -190,8 +179,18 @@ private String getLastPath(String path) {
return path.substring(path.lastIndexOf(PATH_SEP_CHAR.charAt(0)) + 1);
}

private String trimSuffixFromPaths(String path) {
return path.replaceAll("#\\d|#", "");
private String trimIfSingleDoc(String path) {
Contributor:

I'll assume this is still WIP as we discussed wanting to use a Set of child docs, and thus we wouldn't trim conditionally for single doc.

Contributor Author:

Do we really need a set though?
If we trim only the array docs we can then easily figure out at insert time whether the item is a single doc or an array, optimizing the insert time, and removing the need for a set

Contributor:

Oh gotcha -- sure. It does complicate an explanation of what that String is, as it won't simply be a label, but that's okay. Please add comments where we declare the field to explain this stuff (e.g. what the outer String is (with an example), and what the intermediate string is (with 2 examples)).

public static final String NUM_SEP_CHAR = "#";
private static final BooleanQuery rootFilter = new BooleanQuery.Builder()
.add(new BooleanClause(new MatchAllDocsQuery(), BooleanClause.Occur.MUST))
.add(new BooleanClause(new WildcardQuery(new Term(NEST_PATH_FIELD_NAME, new BytesRef("*"))), BooleanClause.Occur.MUST_NOT)).build();
Contributor:

Remember again to use DocValuesExistsQuery

} catch (SyntaxError syntaxError) {
throw new SolrException( ErrorCode.BAD_REQUEST, "Failed to create correct parent filter query" );
}

Query childFilterQuery = null;
if(childFilter != null) {
Contributor:

The code flow from here to the end of the method looks very awkward to me. I think the top "if" condition should test for buildHierarchy so that the nested and non-nested cases are clearly separated. Do you think that would be clear?


import static org.apache.solr.response.transform.ChildDocTransformerFactory.PATH_SEP_CHAR;
import static org.apache.solr.schema.IndexSchema.NEST_PATH_FIELD_NAME;

class DeeplyNestedChildDocTransformer extends DocTransformer {
Contributor:

As with our URP, let's forgo the "Deeply" terminology. I hope this will simply be how any nested docs in the future are done rather than making a distinction.

Contributor Author:

Should I just delete the old transformer and make this one the new ChildDocTransformer?
Or should I give it a new name like ChildDocHierarchyTransformer?

Contributor:

Replace/update ChildDocTransformer, I think.

Contributor Author:

Would the transformer need to support the old method of adding childDocuments to the childDocuments field?

Contributor Author:

?

Contributor:

Yes; it can do both easily enough I think? A separate method could take over for the existing/legacy case.

@Override
public String[] getExtraRequestFields() {
// we always need the idField (of the parent) in order to fill out it's children
return new String[] { idField.getName() };
Contributor:

Oh? I didn't know we cared at all what the ID is.

Contributor Author (moshebla, Jul 29, 2018):

FieldType idFt = idField.getType();
String rootIdExt = getSolrFieldString(rootDoc.getFirstValue(idField.getName()), idFt);

Doesn't this mean we need the root document's ID?

Contributor Author:

Perhaps since _root_ is added to every document, we could use that field instead of the ID field?
This is just a thought that popped into my head.

Contributor:

Oh right; I forgot we needed an ID to do the child doc query limited by this parent ID. Please add a comment. I don't think we should bother with _root_. I guess ChildDocTransformer might break if the id field is not stored but did have docValues. That's a shame; it deserves a TODO.

}

private String getLastPath(String path) {
if(path.lastIndexOf(PATH_SEP_CHAR.charAt(0)) == -1) {
Contributor:

You are double-calculating the lastIndexOf in this method.
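A hedged sketch of the fix, computing lastIndexOf only once (assuming PATH_SEP_CHAR remains the "/" String constant from the factory):

// e.g. "toppings#1/ingredients#2" -> "ingredients#2"; a path with no separator is returned as-is
private String getLastPath(String path) {
  int lastSepIndex = path.lastIndexOf(PATH_SEP_CHAR.charAt(0));
  return lastSepIndex == -1 ? path : path.substring(lastSepIndex + 1);
}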

* Returns the *parent* path for this document.
* Children of the root will yield null.
*/
String lookupParentPath(String currDocPath) {
Contributor:

I think this method should be getParentPath; "lookup" is suggestive of needing to look inside some data structure to find something. Or alternatively, trimLastPath. And again, an input/output example helps nicely.

}

private String getPathByDocId(int segDocId, SortedDocValues segPathDocValues) throws IOException {
int numToAdvance = segPathDocValues.docID()==-1?segDocId: segDocId - (segPathDocValues.docID());
Contributor:

This was confusing me for a minute until I ultimately figured out it doesn't matter since it's only used in your assert. Do you think this is helpful at all or just drop it?

Contributor Author (moshebla, Jul 29, 2018):

Yes,
since if we have not set the docId (the method returns -1), the iterator will advance by one doc too many.


assertJQ(req("q", "type_s:cake",
"sort", "id asc",
"fl", "*,[child hierarchy=true childFilter='lonely" + PATH_SEP_CHAR + "lonelyGrandChild" + PATH_SEP_CHAR + "test2_s:secondTest']"),
Contributor:

I'd prefer you not parameterize PATH_SEP_CHAR or any other syntactical/format constructs. It adds noise that makes it harder to read simple strings.


private static String grandChildDocTemplate(int id) {
int docNum = id / 8; // the index of docs sent to solr in the AddUpdateCommand. e.g. first doc is 0
return "SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<id:" + id + ">, type_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS<type_s:" + types[docNum % types.length] + ">], name_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS<name_s:" + names[docNum % names.length] + ">], " +
Contributor:

Hmmmm; this isn't quite what I had in mind. The "stored,indexed,tokenized,..." stuff is not what I expected. Apparently, a document.toString may not be a good choice. Can you JSONify this? Either query Solr for a JSON response and use that (this approach is semi-common in Solr tests, e.g. TestJsonFacets line 192, params to calling testJQ()), or use JSONWriter.writeSolrDocument() somehow to avoid a search if you want to go that route?

It's too bad we have to see both id & root fields in the document; it'd be nice if the child doc transformer had "fl".

Ultimately, we'd like to see something easy to read -- a JSON structured nested docs with as little noise as possible.

Contributor:

Another option, perhaps easiest, is to pre-process the Solr document before calling toString. This could remove "noise" fields like _root_. If an IndexableField is there then replace it with stringValue().

Contributor Author (moshebla, Jul 29, 2018):

Would this method go in the TestUtils or in this specific test's class?
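A rough sketch of such a cleanup helper, wherever it ends up living (field names and the single-valued handling are illustrative; the test later grew a similar cleanIndexableField helper):

import org.apache.lucene.index.IndexableField;
import org.apache.solr.common.SolrDocument;

private static SolrDocument cleanForToString(SolrDocument doc) {
  // drop internal "noise" fields that only obscure the expected-string comparison
  doc.removeFields("_root_");
  doc.removeFields("_nest_parent_");
  // replace raw IndexableField values with their plain string value
  // (multi-valued fields would need the same mapping applied per element)
  for (String fieldName : doc.getFieldNames()) {
    Object value = doc.getFieldValue(fieldName);
    if (value instanceof IndexableField) {
      doc.setField(fieldName, ((IndexableField) value).stringValue());
    }
  }
  return doc;
}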

"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + ">}, " +
"SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<id:" + (id + 7) + ">, name_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS<name_s:cocoa>], _nest_parent_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_nest_parent_:" + (id + 5) + ">, " +
"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + ">}]}]}";
return "SolrDocument{id="+ id + ", type_s=[" + types[docNum % types.length] + "], name_s=[" + names[docNum % names.length] + "], " +
Contributor:

It's a shame to have all this embedded ID calculation business. During cleaning can they be removed (both "id" and nest parent and root) and we still have enough distinguishing characteristics of the docs to know which is which? Seems that way. It adds a lot of noise.

Contributor Author (moshebla, Jul 30, 2018):

Perhaps we should only keep the ID?
I would prefer to have one unique key to make sure the documents are for sure placed under the right parent. Hopefully that will clean most of the noise.

Contributor:

Keeping one ID is fine; we certainly don't need additional ones. Maybe consider using letters or names for IDs instead of incrementing counters. Anything to help make reading a doc/child structure more readily apparent. Anything to reduce string interpolation here is also a win IMO.

"/response/result/doc[1]/doc[4]/str[@name='id']='5'" ,
"/response/result/doc[1]/doc[5]/str[@name='id']='6'" ,
"/response/result/doc[1]/doc[6]/str[@name='id']='7'"};
"/response/result/doc[1]/arr[@name='_childDocuments_']/doc[1]/str[@name='id']='2'" ,
Contributor Author:

One major difference is the way nested XML is returned, since now it has a key it belongs to.
I thought it would make the review process easier for you if I pointed these differences out.

@@ -242,10 +242,10 @@ private void testChildDocNonStoredDVFields() throws Exception {
"fl", "*,[child parentFilter=\"subject:parentDocument\"]"), test1);

assertJQ(req("q", "*:*", "fq", "subject:\"parentDocument\" ",
"fl", "subject,[child parentFilter=\"subject:parentDocument\" childFilter=\"title:foo\"]"), test2);
"fl", "id,_childDocuments_,subject,intDvoDefault,[child parentFilter=\"subject:parentDocument\" childFilter=\"title:foo\"]"), test2);
Contributor Author:

Another major difference is that, since child documents are now stored as fields, the user needs to explicitly add them to the list of return fields. I have no opinion regarding this issue, but this might pose a problem for some use cases. Perhaps documenting this would be enough?

Contributor:

Ah; interesting. It's logical. Is this only needed for anonymous child docs (thus _childDocuments_), or for any/all possible relationship names that aren't necessarily just at the root level but anywhere in the hierarchy? Perhaps this is where that "anonChildDocs" ought to come into play again for backwards-compatibility's sake? Well perhaps not... someone who is using anonymous child docs today will not have the nested field metadata and thus the old logic will kick in and ensure child documents are added as before; right?

Contributor Author:

This is what I would expect would happen. I am going to try it to check this works as expected. Would not want to suddenly change this for everyone 😟

Contributor Author:

I had to make a small addition and indeed, the behaviour was as expected.
I added a test to ensure there is backwards compatibility with the old XML format here.

Contributor Author:

I was thinking about what you said, and perhaps we could fall back to the old format if the user did not index the field meta-data. I prefer having a flag, since the user can then query two separate collections and get the same output if desired, instead of having to deal with two different XML formats.

Contributor:

I think either the user hasn't yet started using the new key'ed/labeled style of child documents, or they have updated completely. It's a migration to a new way you either do or don't do (and perhaps one day will not have a choice).

Contributor Author (moshebla, Aug 9, 2018):

Of course, one day this will probably get removed. Right now the trigger to get the old XML format is the anonChildDocs flag.

Contributor:

I'm really not a fan of {{anonChildDocs}} flag; I regret I conjured up the idea. If we have "nest" schema fields, the user wants nested documents (including field/label association), if the schema doesn't it ought to work as it used to. I think this is straight-forward to reason about and document.

@@ -32,7 +32,7 @@
import org.junit.BeforeClass;
import org.junit.Test;

public class TestDeeplyNestedChildDocTransformer extends SolrTestCaseJ4 {
public class TestChildDocTransformerHierarchy extends SolrTestCaseJ4 {
Contributor Author:

These "hierarchy" tests are in a separate class since they require a different schema and configuration to be loaded upon start-up.

moshebla (Contributor Author) commented Aug 9, 2018:

I hope this is in a good enough shape for review.
There was an overlap between this ticket and the XMLLoader, so I hope I did not miss anything.

dsmiley (Contributor) commented Aug 9, 2018:

This is pretty close; it has come far. I've applied the patch and started to make some manipulations. I just have some questions, which I'll ask inline. Please don't push new changes as it'd be hard to integrate. I could either post a patch or... maybe I should push a feature branch to my fork on GH which would display diffs from yours. Hmm; does GitHub allow you to show a diff between your feature branch and another feature branch on another fork?

assertJQ(req("q", IndexSchema.NEST_PATH_FIELD_NAME + ":*/grandChild#*",
"fl","*",
assertJQ(req("q", IndexSchema.NEST_PATH_FIELD_NAME + ":*/grandChild",
"fl","*, _nest_path_",
Contributor:

This change (and others here) to a test of an URP that isn't modified in this issue underscores my point made previously about having the test for that URP be more of a unit test of what the URP produces (test the SolrInputDocument), and not executing queries. I'm not saying it was wrong to make this change in this issue but just want you to reflect on the ramifications of these choices.

import org.junit.BeforeClass;
import org.junit.Test;

public class TestChildDocTransformerHierarchy extends SolrTestCaseJ4 {
Contributor:

I'm glad you have split out this test. I want to further create a new "schema-nest.xml" for our tests that explicitly deal with this new nested document stuff. This test will use it, as well as TestNestedUpdateProcessor. schema15.xml can be reverted to as it was. TestChildDocTransformer can continue to use the legacy schema and as-such we can observe that our additions to the underlying code don't disturb anyone who is using it that is unaware of the new nested stuff.


} catch (IOException e) {
doc.put(name, "Could not fetch child Documents");
protected static String buildHierarchyChildFilterString(String queryString) {
List<String> split = StrUtils.splitSmart(queryString, ':');
Contributor:

FYI I've rewritten this method to not use splitting & joining in cases like this which can use "indexOf"/"lastIndexOf". I've also increased its robustness to more complicated query examples of multiple conditions.

For now I think we shouldn't document this; let it be kinda a secret feature until we can query (in q/fq) in like-kind.
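Purely as an illustration of the indexOf/lastIndexOf idea, not the committed code (the query string it actually builds, and its escaping, differ): split the child filter on its last path separator and combine the path prefix with the user's field clause.

// e.g. "toppings/ingredients/name_s:cocoa"
//   path prefix  -> "toppings/ingredients"
//   field clause -> "name_s:cocoa"
protected static String buildHierarchyChildFilterString(String queryString) {
  int lastPathSep = queryString.lastIndexOf('/');
  String pathPrefix = lastPathSep == -1 ? "" : queryString.substring(0, lastPathSep);
  String fieldClause = queryString.substring(lastPathSep + 1);
  // combine a filter on the nest path with the user's field query
  // (illustrative syntax; the real code quotes/escapes the path)
  return "+" + NEST_PATH_FIELD_NAME + ":" + pathPrefix + " +" + fieldClause;
}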

private static String getPathByDocId(int segDocId, SortedDocValues segPathDocValues) throws IOException {
int numToAdvance = segPathDocValues.docID()==-1?segDocId: segDocId - (segPathDocValues.docID());
assert numToAdvance >= 0;
assert segPathDocValues.advanceExact(segDocId);
Contributor:

Woah; we want to call advanceExact even if assertions are disabled -- like, you know, in production/real-world; not just tests. Thankfully, Lucene/Solr's randomized test infrastructure actually randomly disables assertions and thus would catch this (eventually) if I didn't.
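A minimal sketch of the fix, inside getPathByDocId (which already declares throws IOException): do the advance unconditionally and only assert on its result.

// the advance must happen regardless of whether assertions are enabled
boolean advanced = segPathDocValues.advanceExact(segDocId);
assert advanced; // every doc reaching this point is expected to have a nest path value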

@@ -61,6 +57,13 @@
*/
public class ChildDocTransformerFactory extends TransformerFactory {

public static final String PATH_SEP_CHAR = "/";
Contributor:

I don't see why these are declared as Strings and not char. It's clumsy later to do charAt(0)

<field name="_nest_path_" type="descendants_path" indexed="true" multiValued="false" docValues="true" stored="false" useDocValuesAsStored="false"/>
<fieldType name="descendants_path" class="solr.SortableTextField">
<analyzer type="index">
<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="(^.*.*$)" replacement="$0/"/>
Contributor:

Can you please provide an example input String (here in GH) that has multiple levels, and comment what it looks like when it's done? I know how to read regexps but I need to stare at them a bit to figure them out, so let's make it easier to read/review.

Contributor Author:

Sure thing
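For reference, a quick illustration of what that charFilter does, shown with plain java.util.regex (the anchored pattern matches the whole value once and the replacement appends a trailing separator, presumably so a path tokenizer downstream sees every level terminated by '/'):

String path = "toppings#1/ingredients#2";                  // illustrative multi-level input
String withTrailingSep = path.replaceAll("(^.*.*$)", "$0/");
// withTrailingSep is now "toppings#1/ingredients#2/"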

asfgit pushed a commit that referenced this pull request Aug 10, 2018
Took PR #416 (Moshe) and went further with it
moshebla (Contributor Author):

It has been a while since I last worked on this,
and after the merge with your branch some comments were overwritten.
Do you remember what changes are still needed?

@@ -81,8 +81,8 @@ private void testChildDoctransformerXML() {

String test3[] = new String[] {
"//*[@numFound='1']",
"/response/result/doc[1]/doc[1]/str[@name='id']='3'" ,
"/response/result/doc[1]/doc[2]/str[@name='id']='5'" };
"/response/result/doc[1]/doc[1]/str[@name='id']='5'" ,
Contributor Author:

This had to be changed since we initialise the for loop in ChildDocTransformer using the limit param, so the sort is docID descending now.
I thought it important to point this out, and perhaps this should also be documented.

Contributor:

How could choosing the index based on the "limit" change the document ordering? My understanding is that child documents placed onto the parent via this transformer are in docID order, which is the same order they were in as provided to Solr.

Contributor:

I suspect you are simply using the "sort" word inappropriately. "sort" is related to ordering, but we didn't change the order. We did change the "window" or "offset", if you will, into the docs to examine. Ordering/sort hasn't changed.

Contributor Author:

Yes,
I meant the order in which the documents are counted.
My bad.

Contributor:

Again, the order hasn't changed ;-)

Contributor Author:

Since we initialize the loop from the highest index that will exhaust the limit if possible, some documents may be skipped. If we have 3 child docs: 1, 2, 3, and the limit is set to "2", only docs 2 and 3 will be added if we initialise the index to the one that will exhaust the limit. Another way is simply to count the number of matching docs so far, and continue onto the next root doc once the limit is reached (return from the transform method).

Contributor Author:

I guess I was not specific enough beforehand.

Contributor:

"If we have 3 child docs" .... -- the explanation in that sentence is perfectly clear, I think. The next sentence confused me, but whatever.

Contributor Author:

The next sentence is a proposal for a different way to limit the number of child documents.
In case the current logic is sufficient, we have no need to worry about it.

* @param lastDescendantId lowest docID of the root document's descendant
* @return the docID to loop and to not surpass limit of descendants to match specified by query
*/
private int calcLimitIndex(int segDocBaseId, int RootId, int lastDescendantId) {
Contributor Author:

The reason the sort is docID descending is that this method initialises the index to the highest docID needed to exhaust the limit, if possible, or else to the last descendant document.

Contributor:

(I commented elsewhere that I don't understand your claim about the ordering)

I'm now commenting here on this line to suggest simplifying (at least conceptually) this method. You're having it take a mixture of arguments with varying segment vs global bases, and I think it can be simplified. I think this method signature would be simpler as calcDocIdToIterateFrom(firstChildDocId, rootDocId, limit), and the caller can more easily set docId to this in the loop init part and not conditionally look at limit.
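A hedged sketch of the suggested signature (global docIDs throughout; whether limit stays a parameter or becomes a field, and the "no limit" convention, are assumptions):

// start as late as possible while still being able to visit 'limit' descendants
// before reaching the root doc, but never before the first child
private int calcDocIdToIterateFrom(int firstChildDocId, int rootDocId, int limit) {
  if (limit < 0) {
    return firstChildDocId; // no limit: iterate over all of the root's descendants
  }
  return Math.max(firstChildDocId, rootDocId - limit);
}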



* @return the docID to loop and to not surpass limit of descendants to match specified by query
*/
private int calcLimitIndex(int segDocBaseId, int RootId, int lastDescendantId) {
int i = segDocBaseId + RootId - 1; // the child document with the highest docID
Contributor:

I strongly suggest avoiding 'i' as a variable name unless it's a loop index

Contributor Author:

Changed and noted :)

@Test
public void testParentFilterLimitJSON() throws Exception {
indexSampleData(10);
String[] tests = new String[] {
Contributor:

why define these tests up front? The vast majority of "assertJQ" or similar calls I've seen in Solr's tests put them inline at the method call, which I think makes the most sense since it's together with the query. And can the length be checked here? I think that's a key element of this test's purpose :-) BTW if you use the XML based assertions, you have a richer language to work with -- XPath. The json-like assertJQ is some home-grown thing in this project that supposedly is easier for people to understand due to the json-like nature (industry shifts from XML to JSON) but it's limited in capability. I'm not sure if assertJQ can assert an array length but you could investigate.

moshebla (Contributor Author):

I have simplified the calcDocIdToIterateFrom method, and tests seem to pass :)

final int prevSegRootId = segDocBaseId + lastDescendantId;
assert prevSegRootId < i; // previous rootId should be smaller then current RootId
private int calcDocIdToIterateFrom(int lowestChildDocId, int RootDocId) {
assert lowestChildDocId < RootDocId; // first childDocId should be smaller then current RootId
Contributor:

If the root doc has no children, this assertion would fail, right? Hmm; it should be tested that we don't blow up/fail.

Contributor Author:

Yes it would,
I'll add a test to ensure its behaviour.

moshebla (Contributor Author):

I added extra logic to return as soon as we are able to determine that the root doc has no child documents.

final int segPrevRootId = segParentsBitSet.prevSetBit(segRootId - 1); // can return -1 and that's okay
final int segPrevRootId = rootDocId==0? -1: segParentsBitSet.prevSetBit(segRootId - 1); // can return -1 and that's okay

if(segPrevRootId == (rootDocId - 1)) {
Contributor:

you are comparing a segment local ID with a global ID which is incorrect. You should refer to segRootId. This is why I'm particular about using "seg" nomenclature in a body of code that deals with both segment and global IDs -- it makes it at least easier to identify such an error. It's difficult to get tests to detect this; we'd need to commit some docs up front to cause more segments to be created than many tests will do.

Contributor:

You altered line 90 in response to my comment. I'm referring to line 92 -- if(segPrevRootId == (rootDocId - 1)), where my comment is.

Interesting though.... line 90 is different from the line of code I committed to the feature branch. That line on the feature branch is:
final int segPrevRootId = segParentsBitSet.prevSetBit(segRootId - 1); // can return -1 and that's okay
Notice there is no conditional, but there is in your version in this PR. BitSet.prevSetBit will return -1 if given a 0 input (so says its documentation). That's what I was trying to say in my comment on the line of code. Why did you change it?

Contributor Author:

If segRootId is 0, segParentsBitSet.prevSetBit(-1) throws an assertion error, since the index has to be >= 0.
I will fix line 92

Contributor

Ooooh, good catch.

Let's enhance the tests in this file a bit to give us confidence that we're using docIDs correctly (and to help keep future enhancers/modifiers from introducing similar bugs). Here's what I propose: in the @BeforeClass, if random().nextBoolean(), add some nested docs -- using one of your existing nested-document-adding methods -- and randomly do a commit() to flush the segment. Later, in the test methods, we need to add a filter query that will exclude those docs. One way to do this is to ensure these first docs have some field we can exclude on. Another way might be to note the maximum uniqueKey ID prior to the test starting and then add a filter query with a range saying the uniqueKey must be at least that value. Make sense?
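Roughly what that proposal could look like (the helpers indexSampleData, idCounter, firstTestedDocId and the id_i field mirror names used elsewhere in this review, but the exact code is an assumption):

// Hypothetical sketch of the proposed @BeforeClass; helper and field names are assumptions.
@BeforeClass
public static void setUpNonTestedDocs() throws Exception {
  if (random().nextBoolean()) {
    indexSampleData(atLeast(2));        // perturb the Lucene segments with docs the tests must not match
    if (random().nextBoolean()) {
      assertU(commit());                // randomly flush so these docs can land in their own segment
    }
  }
  firstTestedDocId = idCounter.get();   // everything indexed from here on belongs to the tests
  fqToExcludeNonTestedDocs = "id_i:[" + firstTestedDocId + " TO *]";
}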

Contributor Author
@moshebla moshebla Aug 19, 2018

Yes,
I have added this conditional to the tests.
Let's hope I did not misread your intentions :).

@@ -291,4 +302,15 @@ private static String generateDocHierarchy(int i) {
"}\n" +
"}";
}

private static String IndexWoChildDocs() {
Contributor

First, never start a method name with an uppercase letter (at least in Java). Secondly, I suggest inlining this at the point of use since you're only using it in one place. I know this is a stylistic point, but I find it harder to read code that refers to a bunch of other things in other places that I then have to go read -- it's disjointed and disrupts reading flow. Of course a good deal of that is normal (calling string.whatever), but here it's only used for this one test.

Contributor Author

Oops how embarrassing, guess I was in a hurry :(.

}

@After
public void after() throws Exception {
clearIndex();
if (!useSegments) {
Contributor

Oh, my suggestion of beforeClass didn't consider that you might have to do this partial deletion. To make this simpler, move the new document-addition stuff in beforeClass into a before() (and make the pertinent fields non-static); then you can revert this after() to what it was.

Contributor

You could keep this if you want (use BeforeClass but put the delete query in after()). But if you do, it could be simplified: don't conditionally call "clearIndex"; simply always do this delete. And there's a simpler overloaded version -- delQ(queryhere).
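For illustration, the simplified after() might then reduce to something like this (reusing the fq field name discussed above; a sketch, not the committed code):

// Hypothetical simplified after(): always delete the tested docs, leave the @BeforeClass docs alone.
@After
public void after() throws Exception {
  assertU(delQ(fqToExcludeNonTestedDocs));
  assertU(commit());
}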

Contributor Author

So I could simply use delQ with the filter,
great :-).

@@ -264,7 +309,7 @@ private static Object cleanIndexableField(Object field) {
}

private static String grandChildDocTemplate(int id) {
int docNum = id / 8; // the index of docs sent to solr in the AddUpdateCommand. e.g. first doc is 0
int docNum = (id / sumOfDocsPerNestedDocument) % numberOfDocsPerNestedTest; // the index of docs sent to solr in the AddUpdateCommand. e.g. first doc is 0
Contributor

This is looking kinda complicated now (same for fullNestedDocTemplate). Does it really matter exactly how the type_s value is chosen? I don't know; I confess I've glossed over this aspect of the test, and I don't get the point. I wonder whether whatever the test is truly trying to verify here might be achieved by some simpler means.

Contributor Author

The modulo is added to filter out the docs in the other segments and then recover the i that was passed when constructing the nested document. This ensures the child transformer did not fail. If we can be satisfied with only testing the ids, which does not seem as bullet-proof to me, this could be removed and only the ids would be tested.

Contributor Author

I tried simplifying the calculation a bit, here's a link.

private static final int numberOfDocsPerNestedTest = 10;
private static boolean useSegments;
private static int randomDocTopId = 0;
private static String filterOtherSegments;
Contributor

minor point: rename filterOtherSegments to fqToExcludeNontestedDocs and add a comment explaining why we even have non-tested docs -- it's to perturb the Lucene segments a bit to ensure the transformer works with and without docs in other segments

@@ -40,22 +43,52 @@
private static final Iterator<String> ingredientsCycler = Iterables.cycle(ingredients).iterator();
private static final String[] names = {"Yaz", "Jazz", "Costa"};
private static final String[] fieldsToRemove = {"_nest_parent_", "_nest_path_", "_root_"};
private static final int sumOfDocsPerNestedDocument = 8;
Contributor

I understand we need a filter query here that can be referenced by the test, but I'm a bit dubious about all these other ones. You may very well have written the test in such a way that they are necessary at the moment, but let's consider how to simplify.

@@ -20,6 +20,9 @@
<schema name="nested-docs" version="1.6">

<field name="id" type="string" indexed="true" stored="true" multiValued="false" required="true"/>
<field name="idInt" type="int" indexed="true" multiValued="false" docValues="true" stored="false" useDocValuesAsStored="false" />
Contributor

minor: rename to id_i to follow typical naming conventions

Contributor
@dsmiley dsmiley left a comment

I think this is the last back'n forth; just about there now.

private static final String[] fieldsToRemove = {"_nest_parent_", "_nest_path_", "_root_"};
private static final int sumOfDocsPerNestedDocument = 8;
private static final int numberOfDocsPerNestedTest = 10;
private static int randomDocTopId = 0;
Contributor

I suggest renaming this to "firstTestedDocId", and renaming "counter" to "idCounter".

private static final int sumOfDocsPerNestedDocument = 8;
private static final int numberOfDocsPerNestedTest = 10;
private static int randomDocTopId = 0;
private static String fqToExcludeNoneTestedDocs; // filter documents that were created for random segments to ensure the transformer works with multiple segments.
Contributor

None -> Non

}
}
assertU(commit());
randomDocTopId = counter.get();
Contributor

I think this line should be at the end of this method, executed always. I can see it works where it is now, executed conditionally, but I think it'd be clearer if not done in a condition.

// MultiMap is the direct child document's key(of the parent document)
Map<String, Multimap<String, SolrDocument>> pendingParentPathsToChildren = new HashMap<>();

IndexSchema schema = searcher.getSchema();
Contributor

Master has changes recently committed related to document fetching. It'll simplify this nicely here. Can you please sync up?

Contributor Author

Will do

@moshebla
Contributor Author

Rebased on master to include DocFetcher improvements,
which helped remove some boilerplate code.

@moshebla
Contributor Author

Fixed SolrJ tests which failed the limit assertion for ChildDocTransformer.

@@ -92,6 +97,18 @@ private void testChildDoctransformerXML() {
assertQ(req("q", "*:*", "fq", "subject:\"parentDocument\" ",
"fl", "subject,[child parentFilter=\"subject:parentDocument\" childFilter=\"title:foo\"]"), test2);

try(SolrQueryRequest req = req("q", "*:*", "fq", "subject:\"parentDocument\" ",
Contributor

Please include a comment explaining what it is you're testing here (i.e. what is the point of this particular test); it's non-obvious to me.
This test does the checking manually... but it could be done more concisely using the XPath-style assertions used elsewhere in this file and in most tests. For example, if you want to test the number of child documents, it'd be something like this: "count(/response/result/doc[1]/doc)=2", plus a numFound check.
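A hedged sketch of that style, reusing the filters from the surrounding test (the expected numFound and child count are placeholders):

// Hypothetical XPath-style version; the expected counts are placeholders, not the real test data.
assertQ(req("q", "*:*", "fq", "subject:\"parentDocument\"",
        "fl", "subject,[child parentFilter=\"subject:parentDocument\" childFilter=\"title:foo\" limit=2]"),
    "//result[@numFound='1']",
    "count(/response/result/doc[1]/doc)=2");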

indexSampleData(numberOfDocsPerNestedTest);
assertJQ(req("q", "test_s:testing",
"sort", "id asc",
"fl", "*,[child childFilter='lonely/lonelyGrandChild/test2_s:secondTest' parentFilter='_nest_path_:\"lonely/\"']",
Contributor

Aha; I see you're requiring that a custom parentFilter be present in order to use the childDocFilter on non-root docs. I suppose that's fine; it's something that could be enhanced in the future to avoid that inconvenience & limitation (*). Since this is now required, the transformer should throw an error to the user if the doc to be transformed isn't in that filter. For example: "The document to be transformed must be matched by the parentFilter of the child transformer", with the BAD_REQUEST code since it's user error. And add a test for this.

(*) An example of it being a limitation is this: say the child docs are all "comments" in nature, and thus recursive. The top query "q" might match certain comments of interest, and we want all children of those comments returned hierarchically. The parentFilter would end up having to match all comments, but that would prevent returning child comments and thus block the whole idea altogether :-(

To "fix" this limitation, we'd not constrain transformed docs to those in the parentFilter, and we wouldn't insist on any special parentFilter. In the loop that builds the hierarchy, when fetching the path, we'd need to skip over the current doc if it doesn't descend from that of the doc to be transformed. Seems pretty straight-forward.

Contributor Author

Perhaps I could try to remove the parentFilter requirement altogether for hierarchy-based queries?

// get the path. (note will default to ANON_CHILD_KEY if schema is not nested or empty string if blank)
String fullDocPath = getPathByDocId(docId - segBaseId, segPathDocValues);

if(isNestedSchema && !fullDocPath.contains(transformedDocPath)) {
Contributor Author
@moshebla moshebla Aug 28, 2018

Perhaps a better way to do this would be to build a new filter for every transformed doc, e.g. _nest_path_:transformedDocPath?
I am not quite sure of the performance overhead such a technique would impose.
WDYT?

Contributor

Perhaps a better way to do this ...

I think it would be slow and add complexity if we did that.

string.contains(...) doesn't seem right here; shouldn't it be startsWith or endsWith? Will probably need to special-case empty transformedDocPath (rootDocPath).

I think it's possible to not need "isNestedSchema", though it's not a big deal.
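For illustration, the descendant check described above could look roughly like this (the _nest_path_ separator handling and the empty-root-path special case are assumptions):

// Hypothetical descendant test; exact _nest_path_ formatting is an assumption.
boolean descendsFromRoot = rootDocPath.isEmpty()   // transformed doc is a true root: every doc in its block qualifies
    || fullDocPath.startsWith(rootDocPath);        // e.g. "lonely/lonelyGrandChild" descends from "lonely"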

String fullDocPath = getPathByDocId(docId - segBaseId, segPathDocValues);

if(isNestedSchema && !fullDocPath.contains(transformedDocPath)) {
// is not a descendant of the transformed doc, return fast.
return;
Contributor

Whoa; shouldn't this be "continue"? We should have a test that would fail on this bug. All it would take is an additional child doc that is not underneath the input root/transformed doc but follows it (as provided on input). Some of the first docs we iterate over here might not descend from rootDocId.

Contributor Author

Yep, you're right.
I'll investigate further to see why a test did not fail because of this.

Contributor Author
@moshebla moshebla Aug 28, 2018

Added another query to TestChildDocumentHierarchy#testNonRootChildren.
The added query caused the test to fail before the return was changed to continue (previous commit), and to pass with the latest commit.

@@ -99,6 +96,9 @@ public void transform(SolrDocument rootDoc, int rootDocId) {

// we'll need this soon...
final SortedDocValues segPathDocValues = DocValues.getSorted(leafReaderContext.reader(), NEST_PATH_FIELD_NAME);
// passing a different SortedDocValues obj since the child documents which come after are of smaller docIDs,
// and the iterator can not be reversed.
final String transformedDocPath = getPathByDocId(segRootId, DocValues.getSorted(leafReaderContext.reader(), NEST_PATH_FIELD_NAME));
Contributor

Can you call this rootDocPath since we refer to this doc as the "root doc" elsewhere here? I can see why you chose this name. You could add a comment that the "root doc" is the input doc we are adding information to, and is usually but not necessarily the root of the block of documents (i.e. the root doc may itself be a child doc of another doc).

Contributor Author

Sure thing.
Done :-)

if(matches == limit) {
continue;
}
++matches;
Contributor

I think matches should be incremented if the doc is in childDocSet (which includes the case of childDocSet being null). Whether it's an ancestor or not doesn't matter, I think. You could pull out a new variable isInChildDocSet. Or I suppose simply consider everything a match, which I see is what you just did as I write this; that's fine too.
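A rough sketch of that bookkeeping (childDocSet, matches, and limit follow the snippet above; the enclosing loop over candidate child docs is assumed):

// Hypothetical sketch; assumes this runs inside the loop over candidate child docs.
final boolean isInChildDocSet = (childDocSet == null) || childDocSet.exists(docId);
if (isInChildDocSet) {
  if (matches == limit) {
    continue; // enough matching children already gathered; keep looping so ancestors can still be collected
  }
  ++matches;
}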

@@ -124,10 +124,11 @@ public void testParentFilterLimitJSON() throws Exception {

assertJQ(req("q", "type_s:donut",
"sort", "id asc",
"fl", "id, type_s, toppings, _nest_path_, [child limit=1]",
"fl", "id, type_s, lonely, lonelyGrandChild, test_s, test2_s, _nest_path_, [child limit=1]",
Contributor

To the point I wrote in JIRA: it's sad that when I see this I have no idea whether it's right or wrong without going to look at indexSampleData and then thinking about it. No? (This isn't a critique of you in particular; lots of tests, including some I've written, look like the current tests here.) Imagine one doc with some nested docs, all of which only have their ID. Since they only have their ID, it's not a lot of literal text in JSON. The BeforeClass unmatched docs could use negative IDs to easily tell who's who. Anyway, if you would rather leave this as a "TODO" for another day then I understand.

Contributor Author

Leaving this as a TODO for another day sounds like a decent option.

@moshebla
Contributor Author

Was committed in commit 5a0e7a6.

@moshebla moshebla closed this Aug 29, 2018