# SERAC

<figure><img src="https://3121926949-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FK2Vdw69n5ipoU5KHLczg%2Fuploads%2FOSn8QLl6rJWIjSvqbPK2%2F1885595898b73d44ea44d161650f1b3.png?alt=media&#x26;token=dba94a91-9dbd-4033-b6ac-a34ef8f3bd94" alt="" width="450"><figcaption><p>Paper: Memory-Based Model Editing at Scale</p></figcaption></figure>

## SeracRewriteExecutor

> `SeracRewriteExecutor` is the class for applying SERAC to your model. It uses a counterfactual model and a scope classifier to edit models.

### init\_model

> Load the trained SERAC model.

```python
init_model(self, model, tok, params: SERACHparams)
```

* **Parameters**
  * model(<mark style="color:purple;">PreTrainedModel</mark>): model to be edited
  * tok(<mark style="color:purple;">PreTrainedTokenizer</mark>): tokenizer for inputs
  * params(<mark style="color:purple;">SERACHparams</mark>): hyperparameters for editing method
* **Return Type**
  * `None`
### apply\_to\_model()-> PreTrainedModel

> Main function: given the requests, it applies SERAC to your model and returns the edited model weights.

<pre class="language-python"><code class="lang-python">def apply_to_model(
    self,
    model: AutoModelForCausalLM,
    tok: AutoTokenizer,
    requests: List[Dict],
    hparams: SERACHparams,
    copy=False,
    return_orig_weights=False,
    keep_original_weight=False,
<strong>    **kwargs
</strong>):
</code></pre>

* **Parameters**
  * model(<mark style="color:purple;">PreTrainedModel</mark>): model to be edited
  * tok(<mark style="color:purple;">PreTrainedTokenizer</mark>): tokenizer for inputs
  * requests(<mark style="color:purple;">List\[Dict]</mark>): the edit descriptors and targets
  * hparams(<mark style="color:purple;">SERACHparams</mark>): hyperparameters for editing method
  * copy(<mark style="color:purple;">bool</mark>): whether to copy the original model before editing
  * return\_orig\_weights(<mark style="color:purple;">bool</mark>): whether to return the weights of the original model
  * keep\_original\_weight(<mark style="color:purple;">bool</mark>): whether to edit sequentially
    * `False`: edit sequentially (the original weights are not restored after each edit)
    * `True`: do not edit sequentially (each edit starts from the original weights)
* **Return Type**
  * edited\_model(<mark style="color:purple;">PreTrainedModel</mark>): model weights after editing
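The `requests` argument expects one dict per edit. As a minimal sketch, the helper below pairs prompts with their new targets; the `"prompt"` and `"target_new"` key names are assumptions inferred from the `editor.edit()` keyword arguments in the Example section, and the exact schema may differ across versions:

```python
# Sketch: building a `requests` list for apply_to_model.
# NOTE: the "prompt" / "target_new" key names are assumptions, not a
# confirmed schema -- check your installed version before relying on them.
from typing import Dict, List


def build_requests(prompts: List[str], targets: List[str]) -> List[Dict]:
    """Pair each edit prompt with its new target answer."""
    assert len(prompts) == len(targets), "need exactly one target per prompt"
    return [{"prompt": p, "target_new": t} for p, t in zip(prompts, targets)]


requests = build_requests(
    ["What university did Watts Humphrey attend?"],
    ["University of Michigan"],
)
```

Each resulting dict describes one edit, so the same list can drive either a single batched call or a sequential loop depending on `keep_original_weight`.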

### Example

```python
hparams = SERACHparams.from_hparams("llama-7b.yaml")
editor = BaseEditor.from_hparams(hparams)
prompts = ['What university did Watts Humphrey attend?',
    'Which family does Ramalinaceae belong to',
    'What role does Denny Herzig play in football?'
]
target_new = ['University of Michigan',
    'Lamiinae',
    'winger'
]
metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    target_new=target_new,
    keep_original_weight=True
)
```
