
Commit 370fe90

Authored Mar 23, 2024
docs: add code samples for metrics.{recall_score, precision_score, f1_score} (#502)
1 parent c4beafd commit 370fe90

File tree

1 file changed: +48 −0 lines
 

third_party/bigframes_vendored/sklearn/metrics/_classification.py

+48 −0
@@ -128,6 +128,22 @@ def recall_score(

         The best value is 1 and the worst value is 0.

+    **Examples:**
+
+        >>> import bigframes.pandas as bpd
+        >>> import bigframes.ml.metrics
+        >>> bpd.options.display.progress_bar = None
+
+        >>> y_true = bpd.DataFrame([0, 1, 2, 0, 1, 2])
+        >>> y_pred = bpd.DataFrame([0, 2, 1, 0, 0, 1])
+        >>> recall_score = bigframes.ml.metrics.recall_score(y_true, y_pred, average=None)
+        >>> recall_score
+        0    1
+        1    0
+        2    0
+        dtype: int64
+
     Args:
         y_true (Series or DataFrame of shape (n_samples,)):
             Ground truth (correct) target values.

@@ -137,6 +153,7 @@ def recall_score(
             default='binary'):
             This parameter is required for multiclass/multilabel targets.
             Possible values are 'None', 'micro', 'macro', 'samples', 'weighted', 'binary'.
+            Only average=None is supported.

     Returns:
         float (if average is not None) or Series of float of shape (n_unique_labels,): Recall
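Note (not part of the commit): the per-class recall values in the new doctest can be cross-checked with scikit-learn's `recall_score`, assuming scikit-learn is installed; this is a minimal sketch, not the bigframes implementation.

```python
# Hypothetical cross-check, not part of the diff: scikit-learn's recall_score
# with average=None returns one recall value per class, matching the doctest.
from sklearn.metrics import recall_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Class 0: both true 0s are predicted as 0, so recall is 1.0.
# Classes 1 and 2: none of their true samples are predicted correctly, so 0.0.
print(recall_score(y_true, y_pred, average=None))  # [1. 0. 0.]
```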
@@ -160,6 +177,21 @@ def precision_score(

         The best value is 1 and the worst value is 0.

+    **Examples:**
+
+        >>> import bigframes.pandas as bpd
+        >>> import bigframes.ml.metrics
+        >>> bpd.options.display.progress_bar = None
+
+        >>> y_true = bpd.DataFrame([0, 1, 2, 0, 1, 2])
+        >>> y_pred = bpd.DataFrame([0, 2, 1, 0, 0, 1])
+        >>> precision_score = bigframes.ml.metrics.precision_score(y_true, y_pred, average=None)
+        >>> precision_score
+        0    0.666667
+        1    0.000000
+        2    0.000000
+        dtype: float64
+
     Args:
         y_true: Series or DataFrame of shape (n_samples,)
             Ground truth (correct) target values.

@@ -169,6 +201,7 @@ def precision_score(
             default='binary'
             This parameter is required for multiclass/multilabel targets.
             Possible values are 'None', 'micro', 'macro', 'samples', 'weighted', 'binary'.
+            Only average=None is supported.

     Returns:
         precision: float (if average is not None) or Series of float of shape \
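Note (illustrative only, not part of the commit): the precision values in the new doctest follow directly from the definition — correct predictions of a class divided by all predictions of that class. A small sketch reproducing them by hand:

```python
# Hypothetical by-hand computation, not part of the diff.
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

for label in (0, 1, 2):
    predicted = [t for t, p in zip(y_true, y_pred) if p == label]
    correct = sum(1 for t in predicted if t == label)
    precision = correct / len(predicted) if predicted else 0.0
    print(label, round(precision, 6))
# 0 0.666667  (three predictions of 0, two correct)
# 1 0.0       (two predictions of 1, none correct)
# 2 0.0       (one prediction of 2, not correct)
```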
@@ -195,6 +228,21 @@ def f1_score(

         the F1 score of each class with weighting depending on the ``average``
         parameter.

+    **Examples:**
+
+        >>> import bigframes.pandas as bpd
+        >>> import bigframes.ml.metrics
+        >>> bpd.options.display.progress_bar = None
+
+        >>> y_true = bpd.DataFrame([0, 1, 2, 0, 1, 2])
+        >>> y_pred = bpd.DataFrame([0, 2, 1, 0, 0, 1])
+        >>> f1_score = bigframes.ml.metrics.f1_score(y_true, y_pred, average=None)
+        >>> f1_score
+        0    0.8
+        1    0.0
+        2    0.0
+        dtype: float64
+
     Args:
         y_true: Series or DataFrame of shape (n_samples,)
             Ground truth (correct) target values.
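Note (not part of the commit): the F1 values in this doctest are consistent with the recall and precision examples above, since per-class F1 is the harmonic mean of precision and recall. A small illustrative sketch:

```python
# Illustrative only, not part of the diff: per-class F1 as the harmonic mean
# of the precision and recall shown in the earlier doctests.
def f1(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1(2 / 3, 1.0))  # class 0 -> 0.8
print(f1(0.0, 0.0))    # classes 1 and 2 -> 0.0
```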

0 commit comments