public void GetOperation(Fn function, out Operation Operation, out Modifier Modifier, out Mode ModeA, out Mode ModeB) {
  FnHolder holder = resolv[function];
  Operation = holder.Operation;
  Modifier = holder.Modifier;
  ModeA = holder.ModeA;
  ModeB = holder.ModeB;
}
public static Rope<byte> serialize(ISerializer<A> aSer, Fn<B, A> mapper, B b) => aSer.serialize(mapper(b));
public Option<DeserializeInfo<B>> flatMapTry<B>(Fn<A, B> mapper) {
  try { return new DeserializeInfo<B>(mapper(value), bytesRead).some(); }
  catch (Exception) { return Option<DeserializeInfo<B>>.None; }
}
public ISubscription register<Obj, A>(string name, HasObjFn<Obj> objOpt, Fn<Obj, A> run) =>
  register(name, objOpt, obj => Future.successful(run(obj)));
public static ISerializedRW<B> map<A, B>(
  this ISerializedRW<A> aRW, Fn<A, Option<B>> deserializeConversion, Fn<B, A> serializeConversion
) => new MappedRW<A, B>(aRW, serializeConversion, deserializeConversion);
/* Runs f every frame until it returns false. */
public static Coroutine EveryFrame(MonoBehaviour behaviour, Fn<bool> f) {
  var enumerator = EveryWaitEnumerator(null, f);
  return new Coroutine(behaviour, enumerator);
}
private static Stack Build(Config config) {
  var stack = new Stack { Description = "Base stack" };

  stack.Parameters.Add("Environment", new Parameter {
    Type = "String",
    MinLength = 3,
    AllowedValues = new List<string> { "test", "uat", "prod" }
  });

  stack.Resources.Add("CloudFormationServiceRole", new Role {
    Path = "/",
    AssumeRolePolicyDocument = new PolicyDocument {
      Version = "2012-10-17",
      Statement = new List<Statement> {
        new Statement {
          Effect = "Allow",
          Principal = new { Service = "cloudformation.amazonaws.com" },
          Action = "sts:AssumeRole"
        }
      }
    },
    Policies = new List<Policy> {
      new Policy {
        PolicyName = "CloudformationExecutionPolicy",
        PolicyDocument = new PolicyDocument {
          Version = "2012-10-17",
          Statement = new List<Statement> {
            new Statement { Resource = "*", Effect = "Allow", Action = "cloudwatch:*" },
            new Statement { Resource = "*", Effect = "Allow", Action = "logs:*" },
            new Statement { Resource = "*", Effect = "Allow", Action = "apigateway:*" },
            new Statement { Resource = "*", Effect = "Allow", Action = "route53:*" },
            new Statement { Resource = "arn:aws:sns:*:*:*", Effect = "Allow", Action = "sns:ListTopics" },
            new Statement { Resource = "*", Effect = "Allow", Action = "lambda:*" },
            new Statement { Resource = "*", Effect = "Allow", Action = "cognito-idp:*" },
            new Statement { Resource = "*", Effect = "Allow", Action = "cognito-identity:*" },
            new Statement { Resource = $"arn:aws:sns:*:*:{config.Stack}-*", Effect = "Allow", Action = "sns:*" },
            new Statement { Resource = $"arn:aws:s3:::{config.Stack}-*", Effect = "Allow", Action = "s3:*" },
            new Statement { Resource = $"arn:aws:kinesis:*:*:stream/{config.Stack}-*", Effect = "Allow", Action = "kinesis:*" },
            new Statement { Resource = $"arn:aws:firehose:*:*:deliverystream/{config.Stack}-*", Effect = "Allow", Action = "firehose:*" },
            new Statement {
              Effect = "Allow",
              Action = "iam:*",
              Resource = new[] {
                $"arn:aws:iam::*:role/{config.Stack}-*",
                $"arn:aws:iam::*:policy/{config.Stack}-*"
              }
            }
          }
        }
      }
    }
  });

  stack.Add("DeploymentsBucket", new Humidifier.S3.Bucket {
    BucketName = Fn.Sub("${AWS::StackName}-deployments")
  });

  stack.Outputs.Add("DeploymentsBucket", new Output {
    Value = Fn.Ref("DeploymentsBucket"),
    Export = new { Name = Fn.Sub("${AWS::StackName}-DeploymentsBucket") }
  });

  stack.Outputs.Add("CloudFormationServiceRole", new Output {
    Value = Fn.GetAtt("CloudFormationServiceRole", "Arn"),
    Export = new { Name = Fn.Sub("${AWS::StackName}-CloudFormationServiceRole") }
  });

  return stack;
}
public static V getOrElse<K, V>(this IDictionary<K, V> dict, K key, Fn<V> orElse) =>
  dict.TryGetValue(key, out var outVal) ? outVal : orElse();
public void TransformForward(
  double[] samples1, double[] samples2,
  out double[] fftReal1, out double[] fftImag1,
  out double[] fftReal2, out double[] fftImag2
) {
  if (samples1.Length != samples2.Length) {
    throw new ArgumentException(Resources.ArgumentVectorsSameLengths, "samples2");
  }
  if (Fn.CeilingToPowerOf2(samples1.Length) != samples1.Length) {
    throw new ArgumentException(Resources.ArgumentPowerOfTwo, "samples1");
  }

  int numSamples = samples1.Length;
  int length = numSamples << 1;

  // Pack the two real vectors together into one complex vector.
  double[] complex = new double[length];
  for (int i = 0, j = 0; i < numSamples; i++, j += 2) {
    complex[j] = samples1[i];
    complex[j + 1] = samples2[i];
  }

  // Transform the packed complex vector.
  _fft.DiscreteFourierTransform(complex, true, _convention);

  // Reconstruct the spectra of the two vectors by using symmetries.
  fftReal1 = new double[numSamples];
  fftImag1 = new double[numSamples];
  fftReal2 = new double[numSamples];
  fftImag2 = new double[numSamples];

  double h1r, h2i, h2r, h1i;
  fftReal1[0] = complex[0];
  fftReal2[0] = complex[1];
  fftImag1[0] = fftImag2[0] = 0d;

  for (int i = 1, j = 2; j <= numSamples; i++, j += 2) {
    h1r = 0.5 * (complex[j] + complex[length - j]);
    h1i = 0.5 * (complex[j + 1] - complex[length + 1 - j]);
    h2r = 0.5 * (complex[j + 1] + complex[length + 1 - j]);
    h2i = -0.5 * (complex[j] - complex[length - j]);

    fftReal1[i] = h1r;
    fftImag1[i] = h1i;
    fftReal1[numSamples - i] = h1r;
    fftImag1[numSamples - i] = -h1i;
    fftReal2[i] = h2r;
    fftImag2[i] = h2i;
    fftReal2[numSamples - i] = h2r;
    fftImag2[numSamples - i] = -h2i;
  }
}
/* Does an async WWW request, but only one WWW request at a time - there was an IL2CPP bug where
 * having several WWWs executing at once crashed the runtime. */
public static Future<Either<WWWError, WWW>> oneAtATimeWWW(Fn<WWW> createWWW) {
  return wwwsQueue.query(createWWW);
}
private void analysisBtnFunc_OnClick(object sender, EventArgs e) {
  Fn selected_function = RuleParserInputs.Fns[BtnListFunctions.IndexOf((Button)sender)];
  SendMessage(Application.OpenForms[0].Handle, WM_ANALYSISFUNCSELECT, (IntPtr)(int)selected_function, IntPtr.Zero);
}
/* Runs f every `seconds` seconds until it returns false. */
public static Coroutine EveryXSeconds(float seconds, MonoBehaviour behaviour, Fn<bool> f) {
  var enumerator = EveryWaitEnumerator(new WaitForSeconds(seconds), f);
  return new Coroutine(behaviour, enumerator);
}
/* Runs f every `seconds` seconds until it returns false. */
public static Coroutine EveryXSeconds(float seconds, GameObject go, Fn<bool> f) {
  return EveryXSeconds(seconds, coroutineHelper(go), f);
}
/* Runs f every `seconds` seconds until it returns false. */
public static Coroutine EveryXSeconds(float seconds, Fn<bool> f) {
  return EveryXSeconds(seconds, behaviour, f);
}
public static void Call(Fn f) { f(null); }
public void TransformForward(double[] samples, out double[] fftReal, out double[] fftImag) {
  if (Fn.CeilingToPowerOf2(samples.Length) != samples.Length) {
    throw new ArgumentException(Resources.ArgumentPowerOfTwo, "samples");
  }

  int length = samples.Length;
  int numSamples = length >> 1;
  double expSignConvention = (_convention & TransformationConvention.InverseExponent) > 0 ? -1d : 1d;

  // Transform odd and even samples (packed as one complex vector).
  // We work on a copy so the original array is not changed.
  double[] complex = new double[length];
  for (int i = 0; i < complex.Length; i++) {
    complex[i] = samples[i];
  }
  _fft.DiscreteFourierTransform(complex, true, _convention);

  // Reconstruct the full spectrum by using symmetries.
  double theta = Constants.Pi / numSamples;
  double wtemp = Trig.Sine(0.5 * theta);
  double wpr = -2.0 * wtemp * wtemp;
  double wpi = expSignConvention * Trig.Sine(theta);
  double wr = 1.0 + wpr;
  double wi = wpi;

  fftReal = new double[length];
  fftImag = new double[length];

  double h1r, h2i, h2r, h1i;
  fftImag[0] = fftImag[numSamples] = 0d;
  fftReal[0] = complex[0] + complex[1];
  fftReal[numSamples] = complex[0] - complex[1];

  for (int i = 1, j = 2; j <= numSamples; i++, j += 2) {
    h1r = 0.5 * (complex[j] + complex[length - j]);
    h1i = 0.5 * (complex[j + 1] - complex[length + 1 - j]);
    h2r = 0.5 * (complex[j + 1] + complex[length + 1 - j]);
    h2i = -0.5 * (complex[j] - complex[length - j]);

    fftReal[i] = h1r + wr * h2r + wi * h2i;
    fftImag[i] = h1i + wr * h2i - wi * h2r;
    fftReal[numSamples - i] = h1r - wr * h2r - wi * h2i;
    fftImag[numSamples - i] = -h1i + wr * h2i - wi * h2r;

    // For consistency and completeness we also provide the
    // negative spectrum, even though it's redundant in the real case.
    fftReal[numSamples + i] = fftReal[numSamples - i];
    fftImag[numSamples + i] = -fftImag[numSamples - i];
    fftReal[length - i] = fftReal[i];
    fftImag[length - i] = -fftImag[i];

    wr = (wtemp = wr) * wpr - wi * wpi + wr;
    wi = wi * wpr + wtemp * wpi + wi;
  }
}
public static AnalyzerData analyze(
  string fileName, Fn<TypeDefinition, Option<IEntryPoint>> entryPointLookuper, AnalyserLogger log
) {
  var assembly = AssemblyDefinition.ReadAssembly(fileName);
  return analyze(assembly, entryPointLookuper, log);
}
public static A with<A>(Fn<System, A> f) {
  using (var sys = new System(new AndroidJavaClass("java.lang.System"))) {
    return f(sys);
  }
}
public static Fn<A, C> andThen<A, B, C>(this Fn<A, B> f, Fn<B, C> f1) => value => f1(f(value));
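// A minimal self-contained sketch of how andThen composes left to right. The Fn
// delegate below is a hypothetical stand-in (assuming the library's Fn<A, B> is
// a one-argument function type equivalent to System.Func<A, B>); only the
// extension method itself mirrors the definition above.

```csharp
using System;

delegate B Fn<in A, out B>(A a);

static class FnComposition {
  // Feed the result of f into f1: (f andThen f1)(x) == f1(f(x)).
  public static Fn<A, C> andThen<A, B, C>(this Fn<A, B> f, Fn<B, C> f1) =>
    value => f1(f(value));
}

class Demo {
  static void Main() {
    Fn<int, string> toStr = i => i.ToString();
    Fn<string, int> len = s => s.Length;
    var digitCount = toStr.andThen(len);  // int -> string -> int
    Console.WriteLine(digitCount(1234));  // prints 4
  }
}
```

Note that composition runs left to right, unlike mathematical composition: the receiver executes first, then the argument.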
public static ASyncNAtATimeQueue<Params, Return> a<Params, Return>(
  Fn<Params, Future<Return>> execute, ushort maxTasks = 1
) => new ASyncNAtATimeQueue<Params, Return>(maxTasks, execute);
public ISubscription register<A>(string name, Fn<Future<A>> run) => register(name, unitSomeFn, _ => run());
public static Future<A> runOnUI<A>(Fn<A> f) =>
  Future<A>.async(promise => runOnUI(() => {
    var ret = f();
    ASync.OnMainThread(() => promise.complete(ret));
  }));
public static IDeserializer<B> map<A, B>(this IDeserializer<A> a, Fn<A, Option<B>> mapper) =>
  new MappedDeserializer<A, B>(a, mapper);
public static A runOnUIBlocking<A>(Fn<A> f) => SyncOtherThreadOp.a(AndroidUIThreadExecutor.a(f)).execute();
public MappedSerializer(ISerializer<A> aSerializer, Fn<B, A> mapper) {
  this.aSerializer = aSerializer;
  this.mapper = mapper;
}
public static AndroidUIThreadExecutor<A> a<A>(Fn<A> code) => new AndroidUIThreadExecutor<A>(code);
public MappedDeserializer(IDeserializer<A> aDeserializer, Fn<A, Option<B>> mapper) {
  this.aDeserializer = aDeserializer;
  this.mapper = mapper;
}
public static Future<Option<To>> mapO<From, To>(this Future<Option<From>> future, Fn<From, To> mapper) {
  return future.map(opt => opt.map(mapper));
}
public static ISerializer<B> map<A, B>(this ISerializer<A> a, Fn<B, A> mapper) => new MappedSerializer<A, B>(a, mapper);
public static Future<Either<Err, To>> mapE<From, To, Err>(this Future<Either<Err, From>> future, Fn<From, To> mapper) {
  return future.map(e => e.mapRight(mapper));
}
public Closure(Fn fn) { this.fn = fn; }
public static IObservableQueue<A, C> createQueue<A, C>(
  Act<A> addLast, Action removeFirst, Fn<int> count, Fn<C> collection, Fn<A> first, Fn<A> last
) => new ObservableLambdaQueue<A, C>(addLast, removeFirst, count, collection, first, last);
// init_magics() computes all rook and bishop attacks at startup. Magic
// bitboards are used to look up attacks of sliding pieces. As a reference see
// chessprogramming.wikispaces.com/Magic+Bitboards. In particular, here we
// use the so-called "fancy" approach.
private static void init_magics(int Pt, ulong[][] attacks, ulong[] magics, ulong[] masks, int[] shifts, int[] deltas, Fn index) {
  ulong edges, b;
  int i, size, booster;

  for (var s = SquareC.SQ_A1; s <= SquareC.SQ_H8; s++) {
    // Board edges are not considered in the relevant occupancies.
    edges = ((Constants.Rank1BB | Constants.Rank8BB) & ~rank_bb_S(s))
          | ((Constants.FileABB | Constants.FileHBB) & ~file_bb_S(s));

    // Given a square 's', the mask is the bitboard of sliding attacks from
    // 's' computed on an empty board. The index must be big enough to contain
    // all the attacks for each possible subset of the mask, and so is 2 to the
    // power of the number of 1s in the mask. Hence we deduce the size of the
    // shift to apply to the 64 or 32 bit word to get the index.
    masks[s] = sliding_attack(deltas, s, 0) & ~edges;
#if X64
    shifts[s] = 64 - Bitcount.popcount_1s_Max15(masks[s]);
#else
    shifts[s] = 32 - Bitcount.popcount_1s_Max15(masks[s]);
#endif

    // Use the Carry-Rippler trick to enumerate all subsets of masks[s] and
    // store the corresponding sliding attack bitboard in reference[].
    b = 0;
    size = 0;
    do {
      occupancy[size] = b;
      reference[size++] = sliding_attack(deltas, s, b);
      b = (b - masks[s]) & masks[s];
    } while (b != 0);

    // Set the offset for the table of the next square. We have individual
    // table sizes for each square with "Fancy Magic Bitboards".
#if X64
    booster = MagicBoosters[1][rank_of(s)];
#else
    booster = MagicBoosters[0][rank_of(s)];
#endif
    attacks[s] = new ulong[size];

    // Find a magic for square 's' picking up an (almost) random number
    // until we find the one that passes the verification test.
    do {
      do {
        magics[s] = pick_random(rk, booster);
      } while (Bitcount.popcount_1s_Max15((magics[s] * masks[s]) >> 56) < 6);

      Array.Clear(attacks[s], 0, size);

      // A good magic must map every possible occupancy to an index that
      // looks up the correct sliding attack in the attacks[s] database.
      // Note that we build up the database for square 's' as a side
      // effect of verifying the magic.
      for (i = 0; i < size; i++) {
        var idx = index(Pt, s, occupancy[i]);
        var attack = attacks[s][idx];
        if ((attack != 0) && attack != reference[i]) {
          break;
        }
        Debug.Assert(reference[i] != 0);
        attacks[s][idx] = reference[i];
      }
    } while (i != size);
  }
}
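// The Carry-Rippler loop "b = (b - mask) & mask" used above to enumerate
// occupancies is worth isolating: it visits every subset of a bitmask exactly
// once, starting from 0 and wrapping back to 0. A standalone sketch (the
// CarryRippler class is hypothetical, not part of the engine):

```csharp
using System;
using System.Collections.Generic;

static class CarryRippler {
  // Collects all subsets of `mask` in the order the trick produces them.
  // Subtracting the mask borrows through its set bits, so masking the result
  // yields the numerically next subset; a mask with k set bits gives 2^k subsets.
  public static List<ulong> Subsets(ulong mask) {
    var subsets = new List<ulong>();
    ulong b = 0;
    do {
      subsets.Add(b);
      b = (b - mask) & mask;  // next subset of mask
    } while (b != 0);
    return subsets;
  }
}

class Demo {
  static void Main() {
    // mask 0b101 has subsets 0b000, 0b001, 0b100, 0b101.
    foreach (var s in CarryRippler.Subsets(0b101UL)) Console.WriteLine(s);
  }
}
```

This is why `size` above ends up equal to 2 raised to the popcount of `masks[s]`, matching the table size allocated for each square.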
public static Act<A> andThen<A, B>(this Fn<A, B> f, Act<B> a) => value => a(f(value));
public static AnalyzerData analyze(
  AssemblyDefinition assembly, Fn<TypeDefinition, Option<IEntryPoint>> entryPointLookuper, AnalyserLogger log
) {
  var types = allTypes(assembly).ToImmutableList();
  // var abstractImplementations = AbstractTypeImplementations.create(types);
  var entryMethods = types.SelectMany(_ => entryPointLookuper(_).asEnum)
    .SelectMany(ep => ep.entryMethods);
  return analyze(entryMethods, log);
}
/* Runs f every frame until it returns false. */
public static Coroutine EveryFrame(GameObject go, Fn<bool> f) {
  return EveryFrame(coroutineHelper(go), f);
}