Cookbook — pandas 0.22.0 documentation
This is a repository for short and sweet examples and links for useful pandas recipes.
We encourage users to add to this documentation.
Adding interesting links and/or inline examples to this section is a great First Pull Request.
Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to
augment the Stack-Overflow and GitHub links.
Many of the links contain expanded information,
above what the in-line examples offer.
Pandas (pd) and Numpy (np) are the only two abbreviated imported modules. The rest are kept
explicitly imported for newer users.
These examples are written for python 3.4. Minor tweaks might be necessary for earlier python versions.
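Several recipes below also use functools, itertools, and datetime without showing their imports; a plausible setup cell (an assumption, since the original setup block is not shown here):

import datetime
import functools
import itertools

import numpy as np
import pandas as pd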
Idioms
These are some neat pandas idioms
In [1]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
if-then...
An if-then on one column
In [2]: df.loc[df.AAA >= 5,'BBB'] = -1; df
An if-then with assignment to 2 columns:
In [3]: df.loc[df.AAA >= 5,['BBB','CCC']] = 555; df
Add another line with different logic, to do the -else
In [4]: df.loc[df.AAA < 5,['BBB','CCC']] = 2000; df
Or use pandas where after you’ve set up a mask
In [5]: df_mask = pd.DataFrame({'AAA' : [True] * 4, 'BBB' : [False] * 4,'CCC' : [True,False] * 2})
In [6]: df.where(df_mask,-1000)
An if-then-else using numpy's where()
In [7]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
In [8]: df['logic'] = np.where(df['AAA'] > 5,'high','low'); df
Split a frame with a boolean criterion
In [9]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
In [10]: dflow = df[df.AAA <= 5]; dflow
In [11]: dfhigh = df[df.AAA > 5]; dfhigh
Building Criteria
In [12]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...and (without assignment returns a Series)
In [13]: newseries = df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']; newseries
...or (without assignment returns a Series)
In [14]: newseries = df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']; newseries
...or (with assignment modifies the DataFrame).
In [15]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1; df
Select rows with data closest to certain value using argsort
In [16]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
In [17]: aValue = 43.0
In [18]: df.loc[(df.CCC-aValue).abs().argsort()]
Dynamically reduce a list of criteria using binary operators
In [19]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
In [20]: Crit1 = df.AAA <= 5.5
In [21]: Crit2 = df.BBB == 10.0
In [22]: Crit3 = df.CCC > -40.0
One could hard code:
In [23]: AllCrit = Crit1 & Crit2 & Crit3
...Or it can be done with a list of dynamically built criteria
In [24]: CritList = [Crit1,Crit2,Crit3]
In [25]: AllCrit = functools.reduce(lambda x,y: x & y, CritList)
In [26]: df[AllCrit]
DataFrames
Using both row labels and value conditionals
In [27]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
In [28]: df[(df.AAA <= 6) & (df.index.isin([0,2,4]))]
In [29]: data = {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}
In [30]: df = pd.DataFrame(data=data,index=['foo','bar','boo','kar']); df
There are 2 explicit slicing methods, with a third general case
Positional-oriented (Python slicing style : exclusive of end)
Label-oriented (Non-Python slicing style : inclusive of end)
General (Either slicing style : depends on if the slice contains labels or positions)
In [31]: df.loc['bar':'kar'] #Label
In [32]: df.iloc[0:3]
In [33]: df.loc['bar':'kar']
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
In [34]: df2 = pd.DataFrame(data=data,index=[1,2,3,4]); #Note index starts at 1.
In [35]: df2.iloc[1:3] #Position-oriented
In [36]: df2.loc[1:3] #Label-oriented
In [37]: df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}); df
In [38]: df[~((df.AAA <= 6) & (df.index.isin([0,2,4])))]
Panels
In [39]: rng = pd.date_range('1/1/2013',periods=100,freq='D')
In [40]: data = np.random.randn(100, 4)
In [41]: cols = ['A','B','C','D']
In [42]: df1, df2, df3 = pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols)
In [43]: pf = pd.Panel({'df1':df1,'df2':df2,'df3':df3});pf
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 4 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to D
In [44]: pf.loc[:,:,'F'] = pd.DataFrame(data, rng, cols);pf
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 5 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to F
New Columns
Efficiently and dynamically creating new columns using applymap
In [45]: df = pd.DataFrame(
{'AAA' : [1,2,1,3], 'BBB' : [1,1,2,2], 'CCC' : [2,1,3,1]}); df
In [46]: source_cols = df.columns # or some subset would work too.
In [47]: new_cols = [str(x) + "_cat" for x in source_cols]
In [48]: categories = {1 : 'Alpha', 2 : 'Beta', 3 : 'Charlie' }
In [49]: df[new_cols] = df[source_cols].applymap(categories.get);df
Keep other columns when using min() with groupby
In [50]: df = pd.DataFrame(
{'AAA' : [1,1,1,2,2,2,3,3], 'BBB' : [2,1,3,4,5,1,2,3]}); df
Method 1 : idxmin() to get the index of the mins
In [51]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Method 2 : sort then take first of each
In [52]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Notice the same results, with the exception of the index.
MultiIndexing
Creating a multi-index from a labeled frame
In [53]: df = pd.DataFrame({'row' : [0,1,2],
'One_X' : [1.1,1.1,1.1],
'One_Y' : [1.2,1.2,1.2],
'Two_X' : [1.11,1.11,1.11],
'Two_Y' : [1.22,1.22,1.22]}); df
# As Labelled Index
In [54]: df = df.set_index('row');df
# With Hierarchical Columns
In [55]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_')) for c in df.columns]);df
# Now stack & Reset
In [56]: df = df.stack(0).reset_index(1);df
# And fix the labels (Notice the label 'level_1' got added automatically)
In [57]: df.columns = ['Sample','All_X','All_Y'];df
Arithmetic
Performing arithmetic with a MultiIndex that needs broadcasting
In [58]: cols = pd.MultiIndex.from_tuples([ (x,y) for x in ['A','B','C'] for y in ['O','I']])
In [59]: df = pd.DataFrame(np.random.randn(2,6),index=['n','m'],columns=cols); df
In [60]: df = df.div(df['C'],level=1); df
Slicing a multi-index with xs
In [61]: coords = [('AA','one'),('AA','six'),('BB','one'),('BB','two'),('BB','six')]
In [62]: index = pd.MultiIndex.from_tuples(coords)
In [63]: df = pd.DataFrame([11,22,33,44,55],index,['MyData']); df
To take the cross section of the 1st level and 1st axis of the index:
In [64]: df.xs('BB',level=0,axis=0)
#Note : level and axis are optional, and default to zero
...and now the 2nd level of the 1st axis.
In [65]: df.xs('six',level=1,axis=0)
Slicing a multi-index with xs, method #2
In [66]: index = list(itertools.product(['Ada','Quinn','Violet'],['Comp','Math','Sci']))
In [67]: headr = list(itertools.product(['Exams','Labs'],['I','II']))
In [68]: indx = pd.MultiIndex.from_tuples(index,names=['Student','Course'])
In [69]: cols = pd.MultiIndex.from_tuples(headr) #Notice these are un-named
In [70]: data = [[70+x+y+(x*y)%3 for x in range(4)] for y in range(9)]
In [71]: df = pd.DataFrame(data,indx,cols); df
In [72]: All = slice(None)
In [73]: df.loc['Violet']
In [74]: df.loc[(All,'Math'),All]
In [75]: df.loc[(slice('Ada','Quinn'),'Math'),All]
In [76]: df.loc[(All,'Math'),('Exams')]
In [77]: df.loc[(All,'Math'),(All,'II')]
In [78]: df.sort_values(by=('Labs', 'II'), ascending=False)
Missing Data
Fill forward a reversed timeseries
In [79]: df = pd.DataFrame(np.random.randn(6,1), index=pd.date_range('2013-08-01', periods=6, freq='B'), columns=list('A'))
In [80]: df.loc[df.index[3], 'A'] = np.nan
In [81]: df
In [82]: df.reindex(df.index[::-1]).ffill()
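Reindexing in reverse and filling forward is equivalent to a plain backfill in the original order; a quick check of that equivalence (a sketch added here, not part of the original recipe):

# ffill over the reversed index == bfill over the original order
assert df.reindex(df.index[::-1]).ffill().sort_index().equals(df.bfill())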
Grouping
Unlike agg, apply’s callable is passed a sub-DataFrame which gives you access to all the columns
In [83]: df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
'size': list('SSMMMLL'),
'weight': [8, 10, 11, 1, 20, 12, 12],
'adult' : [False] * 5 + [True] * 2}); df
#List the size of the animals with the highest weight.
In [84]: df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
In [85]: gb = df.groupby(['animal'])
In [86]: gb.get_group('cat')
In [87]: def GrowUp(x):
    avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
    avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
    avg_weight += sum(x[x['size'] == 'L'].weight)
    avg_weight /= len(x)
    return pd.Series(['L',avg_weight,True], index=['size', 'weight', 'adult'])
In [88]: expected_df = gb.apply(GrowUp)
In [89]: expected_df
Expanding Apply
In [90]: S = pd.Series([i / 100.0 for i in range(1,11)])
In [91]: def CumRet(x,y):
    return x * (1 + y)
In [92]: def Red(x):
    return functools.reduce(CumRet,x,1.0)
In [93]: S.expanding().apply(Red)
In [94]: df = pd.DataFrame({'A' : [1, 1, 2, 2], 'B' : [1, -1, 1, 2]})
In [95]: gb = df.groupby('A')
In [96]: def replace(g):
    mask = g < 0
    g.loc[mask] = g[~mask].mean()
    return g
In [97]: gb.transform(replace)
In [98]: df = pd.DataFrame({'code': ['foo', 'bar', 'baz'] * 2,
'data': [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
'flag': [False, True] * 3})
In [99]: code_groups = df.groupby('code')
In [100]: agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data')
In [101]: sorted_df = df.loc[agg_n_sort_order.index]
In [102]: sorted_df
In [103]: rng = pd.date_range(start='2014-10-07',periods=10,freq='2min')
In [104]: ts = pd.Series(data = list(range(10)), index = rng)
In [105]: def MyCust(x):
    if len(x) > 2:
        return x[1] * 1.234
    return pd.NaT
In [106]: mhc = {'Mean' : np.mean, 'Max' : np.max, 'Custom' : MyCust}
In [107]: ts.resample("5min").apply(mhc)
In [108]: ts
In [109]: df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(),
'Value': [100, 150, 50, 50]}); df
In [110]: df['Counts'] = df.groupby(['Color']).transform(len)
In [111]: df
In [112]: df = pd.DataFrame(
{u'line_race': [10, 10, 8, 10, 10, 8],
u'beyer': [99, 102, 103, 103, 88, 100]},
index=[u'Last Gunfighter', u'Last Gunfighter', u'Last Gunfighter',
u'Paynter', u'Paynter', u'Paynter']); df
In [113]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)
In [114]: df
In [115]: df = pd.DataFrame({'host':['other','other','that','this','this'],
'service':['mail','web','mail','mail','web'],
'no':[1, 2, 1, 2, 1]}).set_index(['host', 'service'])
In [116]: mask = df.groupby(level=0).agg('idxmax')
In [117]: df_count = df.loc[mask['no']].reset_index()
In [118]: df_count
In [119]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=['A'])
In [120]: df.A.groupby((df.A != df.A.shift()).cumsum()).groups
{1: Int64Index([0], dtype='int64'),
2: Int64Index([1], dtype='int64'),
3: Int64Index([2], dtype='int64'),
4: Int64Index([3, 4, 5], dtype='int64'),
5: Int64Index([6], dtype='int64'),
6: Int64Index([7, 8], dtype='int64')}
In [121]: df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
Splitting
Create a list of dataframes, split using a delineation based on logic included in rows.
In [122]: df = pd.DataFrame(data={'Case' : ['A','A','A','B','A','A','B','A','A'],
'Data' : np.random.randn(9)})
In [123]: dfs = list(zip(*df.groupby((1*(df['Case']=='B')).cumsum().rolling(window=3,min_periods=1).median())))[-1]
In [124]: dfs[0]
In [125]: dfs[1]
In [126]: dfs[2]
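The grouper in the one-liner above is dense. An equivalent, arguably clearer formulation (a sketch; the names grouper and dfs2 are ours, not from the original) shifts the cumulative count of 'B' rows so that each 'B' row closes the group it ends:

# Each 'B' ends a group: count 'B's cumulatively, then shift down one row
# so the 'B' row itself stays with the group it terminates.
grouper = df['Case'].eq('B').cumsum().shift().fillna(0)
dfs2 = [g for _, g in df.groupby(grouper)]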
Pivot
In [127]: df = pd.DataFrame(data={'Province' : ['ON','QC','BC','AL','AL','MN','ON'],
'City' : ['Toronto','Montreal','Vancouver','Calgary','Edmonton','Winnipeg','Windsor'],
'Sales' : [13,6,16,8,4,3,1]})
In [128]: table = pd.pivot_table(df,values=['Sales'],index=['Province'],columns=['City'],aggfunc=np.sum,margins=True)
In [129]: table.stack('City')
In [130]: grades = [48,99,75,80,42,80,72,68,36,78]
In [131]: df = pd.DataFrame( {'ID': ["x%d" % r for r in range(10)],
'Gender' : ['F', 'M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'M'],
'ExamYear': ['2007','2007','2007','2008','2008','2008','2008','2009','2009','2009'],
'Class': ['algebra', 'stats', 'bio', 'algebra', 'algebra', 'stats', 'stats', 'algebra', 'bio', 'bio'],
'Participated': ['yes','yes','yes','yes','no','yes','yes','yes','yes','yes'],
'Passed': ['yes' if x > 50 else 'no' for x in grades],
'Employed': [True,True,True,False,False,False,False,True,True,False],
'Grade': grades})
In [132]: df.groupby('ExamYear').agg({'Participated': lambda x: x.value_counts()['yes'],
'Passed': lambda x: sum(x == 'yes'),
'Employed' : lambda x : sum(x),
'Grade' : lambda x : sum(x) / len(x)})
To create year and month crosstabulation:
In [133]: df = pd.DataFrame({'value': np.random.randn(36)},
index=pd.date_range('2011-01-01', freq='M', periods=36))
In [134]: pd.pivot_table(df, index=df.index.month, columns=df.index.year,
values='value', aggfunc='sum')
Apply
In [135]: df = pd.DataFrame(data={'A' : [[2,4,8,16],[100,200],[10,20,30]], 'B' : [['a','b','c'],['jj','kk'],['ccc']]},index=['I','II','III'])
In [136]: def SeriesFromSubList(aList):
    return pd.Series(aList)
In [137]: df_orgz = pd.concat(dict([ (ind,row.apply(SeriesFromSubList)) for ind,row in df.iterrows() ]))
Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned
In [138]: df = pd.DataFrame(data=np.random.randn(2000,2)/10000,
index=pd.date_range('2001-01-01',periods=2000),
columns=['A','B']); df
In [139]: def gm(aDF,Const):
    v = ((((aDF.A+aDF.B)+1).cumprod())-1)*Const
    return (aDF.index[0],v.iloc[-1])
In [140]: S = pd.Series(dict([ gm(df.iloc[i:min(i+51,len(df)-1)],5) for i in range(len(df)-50) ])); S
Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)
In [141]: rng = pd.date_range(start = '2014-01-01',periods = 100)
In [142]: df = pd.DataFrame({'Open' : np.random.randn(len(rng)),
'Close' : np.random.randn(len(rng)),
'Volume' : np.random.randint(100,2000,len(rng))}, index=rng); df
In [143]: def vwap(bars): return ((bars.Close*bars.Volume).sum()/bars.Volume.sum())
In [144]: window = 5
In [145]: s = pd.concat([ (pd.Series(vwap(df.iloc[i:i+window]), index=[df.index[i+window]])) for i in range(len(df)-window) ]);
In [146]: s.round(2)
Merge
Append two dataframes with overlapping index (emulate R rbind)
In [149]: rng = pd.date_range('2000-01-01', periods=6)
In [150]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])
In [151]: df2 = df1.copy()
Depending on df construction, ignore_index may be needed
In [152]: df = df1.append(df2,ignore_index=True); df
In [153]: df = pd.DataFrame(data={'Area' : ['A'] * 5 + ['C'] * 2,
'Bins' : [110] * 2 + [160] * 3 + [40] * 2,
'Test_0' : [0, 1, 0, 1, 2, 0, 1],
'Data' : np.random.randn(7)});df
In [154]: df['Test_1'] = df['Test_0'] - 1
In [155]: pd.merge(df, df, left_on=['Bins', 'Area','Test_0'], right_on=['Bins', 'Area','Test_1'],suffixes=('_L','_R'))
In [156]: df = pd.DataFrame(
{u'stratifying_var': np.random.uniform(0, 100, 20),
u'price': np.random.normal(100, 5, 20)})
In [157]: df[u'quartiles'] = pd.qcut(
    df[u'stratifying_var'],
    4,
    labels=[u'0-25%', u'25-50%', u'50-75%', u'75-100%'])
In [158]: df.boxplot(column=u'price', by=u'quartiles')
Out[158]: <matplotlib.axes._subplots.AxesSubplot at 0x...>
Data In/Out
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands).
This example shows a WinZipped file, but is a general application of opening the file within a context manager and
using that handle to read.
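For instance, a minimal sketch with the standard-library zipfile module (the archive and member names here are placeholders, not from the original example):

import zipfile

# Open the archive in a context manager and pass the member's handle to read_csv.
with zipfile.ZipFile('data.zip') as z:
    with z.open('inner.csv') as handle:
        df = pd.read_csv(handle)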
Reading multiple files to create a single DataFrame
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all
of the individual frames into a list, and then combine the frames in the list using pd.concat():
In [159]: for i in range(3):
    data = pd.DataFrame(np.random.randn(10, 4))
    data.to_csv('file_{}.csv'.format(i))
In [160]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']
In [161]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
You can use the same approach to read all files matching a pattern.
Here is an example using glob:
In [162]: import glob
In [163]: files = glob.glob('file_*.csv')
In [164]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
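Both options below assume a semicolon-separated block data with junk rows between the header and the data. The original sample cell is not shown here, so the block below is a hypothetical stand-in: ten preamble rows, the real header on row 11, then a units row and a blank row for the recipes to skip:

from io import StringIO

data = """;;;;
;;;;
;;;;
;;;;
;;;;
;;;;
;;;;
;;;;
;;;;
;;;;
date;Param1;Param2;Param4;Param5
;m2;C;m2;m
;;;;
1990-01-01 00:00;1;1;2;3
1990-01-02 00:00;5;3;4;5
1990-01-03 00:00;9;5;6;7
"""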
Option 1: pass rows explicitly to skiprows
In [166]: pd.read_csv(StringIO(data), sep=';', skiprows=[11,12],
index_col=0, parse_dates=True, header=10)
Option 2: read column names and then data
In [167]: pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
Out[167]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')
In [168]: columns = pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
In [169]: pd.read_csv(StringIO(data), sep=';', index_col=0,
header=12, parse_dates=True, names=columns)
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from a csv file and creating a store by chunks, with date parsing as well.
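A minimal sketch of the store-by-chunks part (the file names, node name, date column, and chunk size are all illustrative assumptions):

# Append a large csv to an HDFStore chunk by chunk, parsing dates on the way in.
store = pd.HDFStore('big.h5')
for chunk in pd.read_csv('big.csv', parse_dates=['date'], chunksize=10000):
    store.append('data', chunk, data_columns=['date'])
store.close()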
Storing Attributes to a group node
In [170]: df = pd.DataFrame(np.random.randn(8,3))
In [171]: store = pd.HDFStore('test.h5')
In [172]: store.put('df',df)
# you can store an arbitrary python object via pickle
In [173]: store.get_storer('df').attrs.my_attribute = dict(A = 10)
In [174]: store.get_storer('df').attrs.my_attribute
Out[174]: {'A': 10}
Binary Files
pandas readily accepts numpy record arrays, if you need to read in a binary
file consisting of an array of C structs. For example, given this C program
in a file called main.c compiled with gcc main.c -std=gnu99 on a
64-bit machine,
#include <stdio.h>
#include <stdint.h>

typedef struct _Data
{
    int32_t count;
    double avg;
    float scale;
} Data;

int main(int argc, const char *argv[])
{
    size_t n = 10;
    Data d[n];

    for (int i = 0; i < n; ++i)
    {
        d[i].count = i;
        d[i].avg = i + 1.0;
        d[i].scale = (float) i + 2.0f;
    }

    FILE *file = fopen("binary.dat", "wb");
    fwrite(&d, sizeof(Data), n, file);
    fclose(file);

    return 0;
}
the following Python code will read the binary file 'binary.dat' into a
pandas DataFrame, where each element of the struct corresponds to a column
in the frame:
names = 'count', 'avg', 'scale'
# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
The offsets of the structure elements may be different depending on the
architecture of the machine on which the file was created. Using a raw
binary file format like this for general data storage is not recommended, as
it is not cross platform. We recommend either HDF5 or msgpack, both of
which are supported by pandas’ IO facilities.
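For instance, a minimal HDF5 round-trip (a sketch; requires the PyTables package, and the file and key names here are illustrative):

df.to_hdf('data.h5', 'df')          # write
df2 = pd.read_hdf('data.h5', 'df')  # read back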
Timedeltas
In [181]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
In [182]: deltas = pd.Series([ datetime.timedelta(days=i) for i in range(3) ])
In [183]: df = pd.DataFrame(dict(A = s, B = deltas)); df
In [184]: df['New Dates'] = df['A'] + df['B'];
In [185]: df['Delta'] = df['A'] - df['New Dates']; df
In [186]: df.dtypes
A            datetime64[ns]
B           timedelta64[ns]
New Dates    datetime64[ns]
Delta       timedelta64[ns]
dtype: object
Values can be set to NaT using np.nan, similar to datetime
In [187]: y = s - s.shift(); y
In [188]: y[1] = np.nan; y
Aliasing Axis Names
To globally provide aliases for axis names, one can define these 2 functions:
In [189]: def set_axis_alias(cls, axis, alias):
    if axis not in cls._AXIS_NUMBERS:
        raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
    cls._AXIS_ALIASES[alias] = axis
In [190]: def clear_axis_alias(cls, axis, alias):
    if axis not in cls._AXIS_NUMBERS:
        raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
    cls._AXIS_ALIASES.pop(alias,None)
In [191]: set_axis_alias(pd.DataFrame,'columns', 'myaxis2')
In [192]: df2 = pd.DataFrame(np.random.randn(3,2),columns=['c1','c2'],index=['i1','i2','i3'])
In [193]: df2.sum(axis='myaxis2')
In [194]: clear_axis_alias(pd.DataFrame,'columns', 'myaxis2')
Creating Example Data
To create a dataframe from every combination of some given values, like R’s expand.grid()
function, we can create a dict where the keys are column names and the values are lists
of the data values:
In [195]: def expand_grid(data_dict):
    rows = itertools.product(*data_dict.values())
    return pd.DataFrame.from_records(rows, columns=data_dict.keys())
In [196]: df = expand_grid(
{'height': [60, 70],
'weight': [100, 140, 180],
'sex': ['Male', 'Female']})
In [197]: df
