pyspark.pandas.DataFrame.all

DataFrame.all(axis=0, bool_only=None, skipna=True)

Return whether all elements are True.

Returns True unless there is at least one element within a series that is False or equivalent (e.g. zero or empty).

Parameters
axis : {0 or ‘index’}, default 0

Indicate which axis or axes should be reduced.

  • 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

bool_only : bool, default None

Include only boolean columns. If None, will attempt to use everything, then use only boolean data.

skipna : bool, default True

Exclude NA values, such as None or numpy.NaN. If an entire row/column is NA and skipna is True, the result will be True, as for an empty row/column. If skipna is False, numpy.NaNs are treated as True because they are not equal to zero, while Nones are treated as False.

Returns
Series

Examples

Create a DataFrame from a dictionary.

>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({
...    'col1': [True, True, True],
...    'col2': [True, False, False],
...    'col3': [0, 0, 0],
...    'col4': [1, 2, 3],
...    'col5': [True, True, None],
...    'col6': [True, False, None]},
...    columns=['col1', 'col2', 'col3', 'col4', 'col5', 'col6'])

By default, check whether the values in each column are all True.

>>> df.all()
col1     True
col2    False
col3    False
col4     True
col5     True
col6    False
dtype: bool
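
Because each column is reduced independently, the scalar result for a single column can also be obtained through Series.all; for example:

>>> df['col4'].all()
True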

Include NA values by setting skipna=False.

>>> df[['col5', 'col6']].all(skipna=False)
col5    False
col6    False
dtype: bool

Include only boolean columns by setting bool_only=True.

>>> df.all(bool_only=True)
col1     True
col2    False
dtype: bool
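
The column-wise result is itself a Series, so a chained call (shown here as a brief additional illustration) reduces it to a single boolean indicating whether every column passed the check.

>>> df.all().all()
False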